Article

High-Detail Animation of Human Body Shape and Pose From High-Resolution 4D Scans Using Iterative Closest Point and Shape Maps

Virtual Reality Techniques Division, Institute of Micromechanics and Photonics, Faculty of Mechatronics, Warsaw University of Technology, ul. Św. Andrzeja Boboli 8, 02-525 Warsaw, Poland
*
Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(21), 7535; https://doi.org/10.3390/app10217535
Submission received: 17 September 2020 / Revised: 16 October 2020 / Accepted: 22 October 2020 / Published: 26 October 2020

Abstract

In this article, we present a method of analysis for 3D scanning sequences of human bodies in motion that allows us to obtain a computer animation of a virtual character containing both skeleton motion and high-detail deformations of the body surface geometry, resulting from muscle activity, the dynamics of the motion, and tissue inertia. The developed algorithm operates on a sequence of 3D scans with high spatial and temporal resolution. The presented method can be applied to scans in the form of both triangle meshes and 3D point clouds. One of the contributions of this work is the use of the Iterative Closest Point algorithm with motion constraints for pose tracking, which has been problematic so far. We also introduce shape maps as a tool to represent local body segment deformations. An important feature of our method is the possibility to change the topology and resolution of the output mesh and the topology of the animation skeleton in individual sequences, without requiring time-consuming retraining of the model. Compared to the state-of-the-art Skinned Multi-Person Linear (SMPL) method, the proposed algorithm yields almost twofold better accuracy in shape mapping.

1. Introduction

Measurement and modeling of human body movement is a research field with many visual, medical, and monitoring applications. The data source initially used by scientists was video material. Many methods of pose tracking have been developed on the basis of image sequences, but a lack of information about 3D geometry has been a significant limitation. With the emergence of cheap RGBD sensors, there has been growing interest among scientists in the analysis of unidirectional 3D scans [1,2]. A number of publications have focused on the analysis of unidirectional RGBD data, but the related methods still yield low-resolution output data and suffer from a number of problems, particularly with respect to position estimation from partial views, due to sensor noise and the problem of occlusion [2,3]. The use of 4D scanners (3D scanners capable of capturing geometry multiple times per second) is cost-intensive and has, therefore, remained reserved for a small group of researchers who have access to the necessary equipment. With the emergence of public scanning datasets, the topic of high-resolution reconstruction of motion and deformation of the human body has gained popularity. Reconstructing the movement and shape of a body on the basis of a sequence of 3D scans is a challenging task, due to the deformations of body shape deviating from rigid body dynamics and due to the amount and nature of the input data. The measurement data take the form of a series of point clouds or triangle meshes, where the vertices are not correlated with each other between individual moments in time (i.e., the number of vertices/points can change from frame to frame). In computer graphics applications, a homogeneous topology of geometry in consecutive sequence frames is a basic requirement, in view of rendering performance. Character animation also requires the definition of an animation skeleton for skinning the mesh.
The aim of this work is to transfer changes in both the body shape and the pose of the animation skeleton on the basis of 4D scans with high spatial and temporal resolution.
The main contributions of this work are as follows:
  • The development of a method to transfer human body surface deformations with geometrical accuracy better than current solutions (one of which is taken as a reference method);
  • The introduction of shape maps as a tool to transfer local deformations;
  • The incorporation of the Iterative Closest Point method (in the form of Point-to-Plane [4,5], with rejection and optimization, using the Levenberg–Marquardt solver [6,7]) with motion constraints for pose tracking in 4D scan sequences with high temporal and spatial resolution;
  • Enabling the selection of a different mesh topology and resolution, as well as a skeleton, for each measurement sequence, without needing to retrain the model;
  • Higher-resolution output meshes than state-of-the-art methods.
The remainder of the article is organized as follows: in Section 2, we give an overview of research on 4D data analysis and related topics. In Section 3.1, we provide details about the scanning data used in this work; in Section 3.2, we present the general outline of our algorithm; and, in Section 3.3 and Section 3.4, we describe the skeleton tracking and shape transfer methods, respectively, in detail. In Section 4, we present the results of an evaluation of the reconstruction quality of our method, compared to the reference method. The article concludes with a summary and description of future work, in Section 5.

2. Related Work

The appearance of low-cost RGBD cameras on the market (e.g., Kinect) has contributed to a growing interest in the subject of 4D data analysis by researchers worldwide [1,2,3,8,9,10,11,12,13,14]. The data obtained by such sensors are heavily noisy, but their low price and the additional depth information correlated with the RGB image have created new opportunities. The authors of Reference [3] used three such sensors, together with pressure sensors placed in shoes, for pose estimation and registration of the triangle mesh. Barros et al. [13] presented a method for pose estimation based on scans of a human body, using two opposite RGBD cameras and a pre-defined skeleton model. Once the skeleton base point is initialized with Principal Component Analysis (PCA), the individual scan parts are iteratively segmented and fitted based on Expectation Maximization (EM). In the skeleton model used, the geometric relationships between individual skeleton nodes are strictly defined. Some works also focused on hand movement tracking [9,15,16]. Tsoli and Argyros [15] proposed a joint optimization method through energy minimization to track the motion of a hand in contact with a cloth surface. They track both the deformations of the object and the pose of the hand interacting with it, based on data from a Kinect2 sensor. One must also note the research on environment geometry discovery in robotics [17,18,19,20]. One of the best-known groups of methods in this field is Structure from Motion (SfM) [21,22], where the geometry of the environment is computed from a series of 2D images taken from different viewpoints. To this group belongs the algorithm developed by Giannarou and Yang [17], which focuses on the reconstruction of a surgical environment observed through an endoscopic camera. The authors incorporated an Unscented Kalman Filter, along with data from an Inertial Measurement Unit (IMU), to achieve deformable Structure from Motion. The work of Gotardo and Martinez [18] solves the rigid SfM problem by estimating the smooth time-trajectory of a camera moving around an object. The introduction of a Discrete Cosine Transform parametrization, along with the assumption of a smooth camera trajectory, enabled the researchers to perform non-rigid SfM in the presence of occlusions.
To date, few works have focused on high-resolution data, due to the high cost of obtaining such a system. One of the first collections of this type—which is available for a fee—is the CAESAR dataset (Civilian American and European Surface Anthropometry Resource Project) [23], containing about 4400 scans of different people. The pioneering work in the analysis of high-resolution 3D scans includes the SCAPE method (Shape Completion and Animation of People) [24], which is based on the model of body shape deformation as a function of the basic body shape of a given person and their pose at a specific moment. The authors presented the use of their method for the completion of unidirectional scans, as well as for the generation of mesh deformations in a sequence, based on a static scan of a person and a sequence of marker motions obtained using a motion capture system. Another dataset of scans acquired with the use of a 3DMD scanner has been made publicly available, under the name Dynamic FAUST (DFAUST) [25], which has provided important motivation for research development in the field of high-resolution 4D scan analysis techniques [23,25,26,27,28,29,30,31,32,33]. This dataset consists of thousands of 3D scans of various people in different poses. The authors developed the dataset using the Skinned Multi-Person Linear (SMPL) model [34], which is a trained model of various body shapes and their deformations with pose changes. Next, the authors used this model for pose and shape estimation from 2D images through the incorporation of a Convolutional Neural Network (CNN) [27,28]. The researchers also developed an analogous model for infants (SMIL, the Skinned Multi-Infant Linear model) using information from RGBD sequences [35,36]. Dyna [37] is a model which describes soft-tissue deformations, depending on the body shape and motion dynamics. A linear PCA sub-space is used for the representation of soft-tissue deformations, whereby (as pointed out in [38]) the model has difficulty in reflecting highly non-linear deformations. DMPL (Dynamic-SMPL) [34] is based on a similar concept to Dyna, increasing the reconstruction quality by defining the model with vertices instead of triangles (as was the case in the Dyna model). SoftSMPL [38] improved on these models by introducing a non-linear deformation model and a new motion descriptor.
Further studies have involved deep learning for pose and shape estimation in 4D scans [38,39,40,41,42,43,44]. One of the directions in this area is the use of a Variational Autoencoder (VAE) to encode the 3D scan shape [40,41,42,43,44,45,46]. Litany et al. [42] used a VAE for body shape completion based on a partial view. In the aforementioned work, a neural network operates directly on the vertex positions in 3D space, instead of manually selected features (as in [44]). Jiang et al. [45,46] proposed a VAE-like network for learning the shape and pose of a human body from a sequence of triangle meshes with the same topology. For this purpose, they used measurement datasets (i.e., SCAPE, FAUST, MANO (hand Model with Articulated and Non-rigid defOrmations), Dyna, DFAUST, and CAESAR), applying to them a novel homogeneous mesh registration method based on as-consistent-as-possible (ACAP) representations, originally proposed by Gao et al. [47]. Another work [48] extended the SMPL model with skeleton information, determining point features in the input data using PointNet++ and mapping them to the joints. Through this operation, the authors improved the process of learning the parameters of the SMPL model.
In summary, the existing methods for pose and shape tracking in human 3D scan sequences generally focus on the concept presented by the SMPL method, which uses a model trained on an extensive dataset of sample measurements. The works extending this model, such as SoftSMPL or the work of Jiang et al., have introduced improvements in model training, but still yield low-resolution output meshes. In these methods, it is also impossible to easily change the topology of the output mesh and the skeleton from sequence to sequence, as it is necessary to retrain the model on at least several thousand scans.

3. Materials and Methods

3.1. Materials

We used data from the Dynamic FAUST dataset [25], which contains sequences of scans of the human body surface in motion in the form of triangle meshes recorded at a frequency of 60 fps. The meshes are not correlated with each other in consecutive frames (i.e., the number of triangles and vertices changes from frame to frame). To evaluate the reconstruction quality, two sequences of 100 frames were used (Figure 1 and Figure 2).
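For illustration, a minimal loading sketch is given below. It assumes that each frame of a sequence has been exported as a separate mesh file named frame_0000.ply, frame_0001.ply, and so on (a hypothetical layout, not the actual distribution format of the dataset), and uses the trimesh library to obtain per-frame vertices and normals.

```python
# Minimal sketch, assuming one PLY mesh per frame with a hypothetical naming
# scheme; the actual DFAUST distribution layout may differ.
import glob
import numpy as np
import trimesh

def load_scan_sequence(directory):
    frames = []
    for path in sorted(glob.glob(f"{directory}/frame_*.ply")):
        mesh = trimesh.load(path, process=False)
        frames.append({
            "points": np.asarray(mesh.vertices, dtype=np.float64),
            "normals": np.asarray(mesh.vertex_normals, dtype=np.float64),
        })
    return frames  # vertex counts may differ from frame to frame
```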

3.2. Method Overview

The presented method uses a sequence of 3D scans of the human body in motion and translates this information into a computer animation of a virtual character. In the measurement dataset used, each measurement frame contains data in the form of a triangle mesh; however, in the proposed process, data in the form of 3D point clouds can be used as well.
The proposed method consists of three stages:
  • Pose tracking based on the displacement of individual segments of an input scan, using a variant of the Iterative Closest Point method (hereafter referred to as tracking);
  • Mapping of the scan shape in a sequence, using shape maps (hereafter referred to as mapping);
  • Morphing (registration) of a uniform template mesh to the generated shape maps (hereafter called morphing).
The input data for the algorithm are as follows:
  • A sequence of 3D scans of the human body in motion, in the form of either meshes or point clouds;
  • An animation skeleton pose for the first frame;
  • A template mesh skinned to the animation skeleton for the first frame (we used automatic weights generated by the Heat Bone Weighting method [49] implemented in Blender).
The general outline of the proposed method is as follows (Figure 3): the tracking algorithm, based on two measurement frames—(t) and (t + 1)—and the skeleton for frame (t) calculates the segmentation of scan (t) for the skeleton and the pose for frame (t + 1). The scan segmentation in frame (t), together with the scan, is then used by the mapping and morphing algorithms to transfer the shape to the template mesh. After obtaining the shape maps for all the segments in a given frame, the template mesh is posed according to the skeleton in frame (t) and morphed based on the shape maps, resulting in the final triangle mesh for frame (t). This process is repeated for subsequent pairs of frames (t + 1, t + 2) (note that tracking produced the skeleton for frame (t + 1)), until the end of the measurement sequence is reached. After processing the entire scanning sequence, a series of triangle meshes with the same topology as the template mesh together with a series of skeleton poses is obtained as the output of the algorithm. The shapes of the meshes reflect the scan geometry in individual frames, while the skeleton matches the body pose change in subsequent frames of the sequence.

3.3. Skeleton Tracking

Skeleton tracking (Figure 4) operates on a sequence of the 3D scans and a skeleton for the first frame. Starting from the first frame, the current frame (t) is segmented based on the skeleton pose for that frame. Then, each of the generated segments is fitted to the point cloud of the next frame (t + 1) using the Iterative Closest Point (ICP) method. On the grounds of the obtained transform change of the segment, the transform of the relevant bone is updated by applying motion constraints specific to that bone. After analyzing all the segments (bones) in this way, the pose of the next frame (t + 1) is obtained. This procedure is iteratively repeated for successive frames of the sequence.
The purpose of segmentation (Figure 5) is to assign each point from the input scan to a skeleton bone. In the devised method, a point is assigned to the nearest bone which meets the cutoff plane, normal vector compatibility, and coplanarity conditions. Filtration with the cutoff plane consists of defining, for each bone (Bn) in the skeleton, a cutoff plane (Zn) anchored at the end of the preceding bone, Bn − 1, whose normal vector (NZn) is directed according to the sum of the vectors of the given bone (Bn) and the preceding bone (Bn − 1) (see Figure 6). A point (P) cannot be assigned to the bone (Bn) if it lies behind this plane.
The normal vector parallelism condition allows us to exclude incorrect assignment of points from large segments (e.g., torso) to the bones of smaller segments (e.g., hands). According to this filtering, a point (P) may be assigned to bone (B) if the scalar product of its normal vector (N) and the radius vector (R) from the bone to the point is greater than a defined threshold (Figure 7, Equation (1)):
R · N > α
where R is the radius vector from the bone to point (P), N is the normal vector in point (P), and α is the threshold.
The next condition, coplanarity, is defined as follows (Equation (2)):
|(R × B) · N| ≤ β
where R is the radius vector from the bone to point (P), N is the normal vector in point (P), B is the binormal vector in point (P), and β is the threshold.
Among the bones in the skeleton that passed the three above tests against the point (P), the nearest one is selected and assigned to P. After completing the operation for all points, the values for unassigned points are populated using a median filter. Finally, a median filter with a fixed minimum number of neighbors equal to half the number of vertices in the neighborhood is applied. With this last filtration, small groups of incorrectly assigned vertices are fixed.
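The following sketch illustrates the three per-point tests for a single candidate bone. It is a simplified sketch under several assumptions not stated in the text: the radius vector R is taken from the closest point on the bone axis to P and is normalized so that the threshold α is dimensionless, the vector B in the coplanarity test is approximated by the normalized bone direction, and the threshold values are illustrative only.

```python
import numpy as np

def point_passes_bone_tests(p, n, bone_start, bone_end,
                            prev_bone_start, alpha=0.0, beta=0.1):
    """Return True if point p with normal n may be assigned to the bone."""
    bone_vec = bone_end - bone_start
    prev_vec = bone_start - prev_bone_start
    # 1. Cutoff plane: anchored at the start joint of the bone, with its normal
    #    along the sum of the current and preceding bone vectors (Figure 6).
    if np.dot(p - bone_start, bone_vec + prev_vec) < 0.0:
        return False  # the point lies behind the cutoff plane
    # Radius vector R: from the closest point on the bone axis to P (assumption),
    # normalized so that alpha is dimensionless.
    t = np.clip(np.dot(p - bone_start, bone_vec) / np.dot(bone_vec, bone_vec), 0.0, 1.0)
    r = p - (bone_start + t * bone_vec)
    r = r / (np.linalg.norm(r) + 1e-12)
    # 2. Normal vector parallelism: R . N must exceed alpha (Equation (1)).
    if np.dot(r, n) <= alpha:
        return False
    # 3. Coplanarity: |(R x B) . N| <= beta (Equation (2)); B is approximated
    #    here by the normalized bone direction.
    b = bone_vec / (np.linalg.norm(bone_vec) + 1e-12)
    if abs(np.dot(np.cross(r, b), n)) > beta:
        return False
    return True
```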
On the grounds of the above segmentation, the scan in frame (t) is divided into fragments corresponding to individual bones. In order to track the bone transform in the next frame, the segment for the bone (B) is fitted to the scan in frame (t + 1) using the Iterative Closest Point method. Among the many variants of this algorithm, a Point-to-Plane [4,5] version with rejection and optimization using the Levenberg–Marquardt solver [6,7] was chosen. For the estimation of the correspondence between the set of segment points and the next frame scan, a random subset of the segment’s points is used. After establishing pairs of corresponding points in both clouds, pairs whose distance exceeds a defined threshold are rejected. The choice of the Point-to-Plane objective function, as pointed out by Rusinkiewicz and Levoy [5], ensures faster convergence than the standard Point-to-Point algorithm.
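A minimal sketch of this fitting step is shown below. It uses random subsampling, distance-based pair rejection, and a point-to-plane objective minimized with SciPy's Levenberg–Marquardt solver; the iteration count, sample size, and rejection threshold are illustrative values, not those used in our experiments.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.optimize import least_squares

def rotation_from_vector(w):
    # Rodrigues' formula: rotation matrix from an axis-angle vector w.
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def fit_segment_icp(src_pts, dst_pts, dst_normals,
                    iters=20, sample=2000, reject_dist=0.05):
    """Rigid transform fitting a frame-(t) segment onto the frame-(t + 1) scan."""
    tree = cKDTree(dst_pts)
    T = np.eye(4)
    for _ in range(iters):
        # Random subset of the segment points, moved by the current estimate.
        idx = np.random.choice(len(src_pts), min(sample, len(src_pts)), replace=False)
        p = src_pts[idx] @ T[:3, :3].T + T[:3, 3]
        d, j = tree.query(p)
        keep = d < reject_dist                  # reject distant correspondences
        p, q, n = p[keep], dst_pts[j[keep]], dst_normals[j[keep]]
        if len(p) < 6:
            break

        def residuals(x):
            # Point-to-plane residuals: n . (R p + t - q).
            R, t = rotation_from_vector(x[:3]), x[3:]
            return np.einsum('ij,ij->i', p @ R.T + t - q, n)

        x = least_squares(residuals, np.zeros(6), method='lm').x
        dT = np.eye(4)
        dT[:3, :3], dT[:3, 3] = rotation_from_vector(x[:3]), x[3:]
        T = dT @ T
    return T
```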
After fitting the segment to the scan in the next frame, the bone position is updated based on the input and output transforms of the segment from ICP, taking into account the motion constraints of the given bone. We consider three types of constraints: no constraint, translation constraints, and kinematic chain constraints. The first type applies in the case of the hip segment, from which the entire skeleton tree begins. The skeleton root can move freely, reflecting the change in position and orientation of the scanned subject. The second type, translation constraints, applies to all bones located lower in the skeleton tree, as the continuity of the bone chain must be maintained. The last type of constraint applies to the leg and arm helper bones, which do not take part in the segmentation process. In this case, position changes of the bone start point are allowed, but must maintain the distance to the preceding spine bone. In this way, the bone moves on the surface of a sphere with radius equal to the helper bone length, and the helper bone transform is adjusted to close the skeleton chain.
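The two non-trivial constraint types can be sketched as follows (the joint representation and helper-bone handling are simplified assumptions).

```python
import numpy as np

def apply_translation_constraint(parent_end_joint):
    # A bone lower in the skeleton tree may rotate freely, but its start joint
    # must coincide with the end joint of its parent to keep the chain connected.
    return parent_end_joint.copy()

def apply_helper_sphere_constraint(proposed_start, spine_joint, helper_length):
    # A helper bone start may slide, but only on a sphere of radius equal to the
    # helper bone length, centred at the preceding spine joint.
    offset = proposed_start - spine_joint
    dist = np.linalg.norm(offset)
    if dist < 1e-9:
        return spine_joint + np.array([0.0, 0.0, helper_length])
    return spine_joint + offset * (helper_length / dist)
```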

3.4. Shape Mapping and Morphing

The transfer of deformations relies on shape maps, whose values are computed from the scan; the template mesh is then deformed according to these maps. The shape map construction schema is presented in Figure 8. The shape map of a given segment is defined by a parametric mapping in the local co-ordinate system of the segment (e.g., a spherical mapping or capsule mapping [50,51,52]). The parametric mapping converts the 3D co-ordinates of the segment measurement points into 2D co-ordinates on the shape map and the map value at these points. The value of the shape map at a given point corresponds to the third co-ordinate of the parametric mapping (i.e., the distance to the center of the mapping; Equation (3)):
u,v,r = f(x,y,z,C)
where u,v are the shape map co-ordinates, r is the value of the shape map, f is the parametric mapping, x,y,z are the 3D coordinates of the segment point, and C is the mapping center.
The direct meaning of the u,v co-ordinates depends on the chosen parametric mapping. For spherical mapping, the (u,v) co-ordinates denote the polar and azimuthal angles, respectively (Figure 9). In this case, the function f expands to the following (Equation (4)):
u = cos⁻¹(z / r), v = tan⁻¹(y / x), r = √((x − Cx)² + (y − Cy)² + (z − Cz)²)
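A small sketch of this spherical mapping and of its inverse (needed later during morphing) is given below; arctan2 is used in place of tan⁻¹ so that the azimuth covers the full circle, and all three co-ordinates are expressed relative to the mapping center C.

```python
import numpy as np

def spherical_forward(points, center):
    # Equation (4): u is the polar angle, v the azimuthal angle, r the distance
    # to the mapping center C.
    d = points - center
    r = np.linalg.norm(d, axis=1)
    u = np.arccos(np.clip(d[:, 2] / np.maximum(r, 1e-12), -1.0, 1.0))
    v = np.arctan2(d[:, 1], d[:, 0])
    return u, v, r

def spherical_inverse(u, v, r, center):
    # Reverse mapping from parametric space back to 3D (used during morphing).
    x = r * np.sin(u) * np.cos(v)
    y = r * np.sin(u) * np.sin(v)
    z = r * np.cos(u)
    return center + np.stack([x, y, z], axis=1)
```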
After mapping all segment points from frame (t), map values in the entire domain are computed. The shape map is divided into a grid with a predefined resolution and, for each cell, the map value is established by averaging the values from points falling into this cell. Next, in order to fill missing values in cells without measurement points, a mipmapping technique is used, which assigns such cells the value from a lower-resolution mipmap level. This helps to avoid artifacts and holes in the morphed mesh, while simultaneously maintaining the high resolution of the base shape map and, thus, keeping the reconstruction error low. Finally, a Gaussian filter is applied to the shape map, in order to achieve smoother transitions between grid cells. Figure 10 presents a probed shape map for the arm segment in the time frame t0 of the “jumping jacks” sequence.
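The rasterization described above can be sketched as follows, with a single mip level used to fill empty cells and a Gaussian filter for smoothing; the grid resolution and filter width are illustrative choices, and the grid dimensions are assumed to be even.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_shape_map(u, v, r, res=(100, 120), sigma=1.0):
    # Rasterise (u, v, r) samples into a grid by averaging per cell.
    iu = np.clip((u / np.pi * res[0]).astype(int), 0, res[0] - 1)
    iv = np.clip(((v + np.pi) / (2 * np.pi) * res[1]).astype(int), 0, res[1] - 1)
    acc, cnt = np.zeros(res), np.zeros(res)
    np.add.at(acc, (iu, iv), r)
    np.add.at(cnt, (iu, iv), 1.0)
    shape_map = np.divide(acc, cnt, out=np.zeros(res), where=cnt > 0)
    # Mip-style fill: average 2x2 blocks of occupied cells and copy the coarse
    # value into fine cells that received no measurement points.
    a4 = acc.reshape(res[0] // 2, 2, res[1] // 2, 2).sum(axis=(1, 3))
    c4 = cnt.reshape(res[0] // 2, 2, res[1] // 2, 2).sum(axis=(1, 3))
    coarse = np.divide(a4, c4, out=np.zeros_like(a4), where=c4 > 0)
    filled = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    shape_map = np.where(cnt > 0, shape_map, filled)
    # Smooth transitions between grid cells.
    return gaussian_filter(shape_map, sigma=sigma)
```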
In order to obtain, as output, a mesh with homogeneous topology over the entire sequence, we used a template character mesh and morphed it according to the shape maps. The template mesh is skinned to the skeleton from the tracking algorithm, using the default weights produced by the Heat Bone Weighting method [49] (implemented in the open-source 3D-creation software Blender). First, the segmentation of the mesh to the skeleton in pose t0 is performed, in order to know which shape map should be used to morph each vertex of the template. Next, the mesh is posed based on the pose in frame (t), using the skinning method. The initially fitted mesh prepared this way is morphed according to the value of the shape map of the given segment (Figure 11). The vertices of the template mesh are mapped into the parametric space, a new value of the distance at each point is read, and a reverse mapping is applied (i.e., from the parametric to the 3D space). In order to reduce visible faults on the output mesh in the transition areas between individual grid cells of the shape map, the map value at point (u,v) is established through bilinear interpolation of the values of the cells adjacent to (u,v). It should be noted that the resolution of the template mesh is independent of the resolution of the shape maps, such that it is possible to obtain preview results on a lower-resolution mesh with a significantly smaller calculation time.
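A minimal sketch of the per-segment morphing step is shown below. It reuses the spherical mapping functions from the earlier sketch and assumes that (u,v) are simply normalized to the grid extent; the handling of segment boundaries is omitted.

```python
import numpy as np

def sample_bilinear(shape_map, fu, fv):
    # Bilinear interpolation of the shape map at normalised co-ordinates in [0, 1].
    h, w = shape_map.shape
    x = np.clip(fu * (h - 1), 0, h - 1.000001)
    y = np.clip(fv * (w - 1), 0, w - 1.000001)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * shape_map[x0, y0]
            + dx * (1 - dy) * shape_map[x0 + 1, y0]
            + (1 - dx) * dy * shape_map[x0, y0 + 1]
            + dx * dy * shape_map[x0 + 1, y0 + 1])

def morph_segment(posed_vertices, shape_map, center):
    # Map posed template vertices into parametric space, read the new radius
    # from the shape map, and map back to 3D.
    u, v, _ = spherical_forward(posed_vertices, center)
    fu = u / np.pi                      # normalise polar angle to [0, 1]
    fv = (v + np.pi) / (2 * np.pi)      # normalise azimuth to [0, 1]
    r_new = sample_bilinear(shape_map, fu, fv)
    return spherical_inverse(u, v, r_new, center)
```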
The process above is repeated for each frame, resulting in a series of morphed meshes with topology and vertex count congruent to the template mesh. Finally, these meshes can be applied as shape keys to the template mesh. In Figure 12, we present reconstructions of frames selected from the test sequence.

4. Results and Discussion

An evaluation of the fitting error was carried out, in order to determine the reconstruction quality between the output mesh and the input scan. The fitting error was defined as the distance from a given vertex of the input scan to the nearest vertex in the reconstructed mesh. For each frame of the analyzed sequence, the reconstruction error for all vertices in the scan was computed (Table 1; the full listing is included in Appendix A) and then, based on these data, the average and median errors for each frame were calculated. Furthermore, in order to compare the results of our method with a state-of-the-art reference method, we carried out an analogous procedure for the reconstructed meshes shared by the authors of the Skinned Multi-Person Linear (SMPL) model method. In Figure 13 and Figure 14, we present the average and median reconstruction errors for both methods in subsequent frames of the test sequences. Figure 15 depicts reconstruction error histograms for both methods in chosen frames of both sequences. Additionally, illustrations allowing for visual assessment of the reconstruction quality in chosen frames were prepared (compare Figure 12 and Figure 16a). Figure 16b shows the influence of the template mesh resolution on the resulting reconstruction.
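For a single frame, this evaluation can be sketched as follows, using a k-d tree for the nearest-vertex search (distances are in the units of the input data, millimetres here).

```python
import numpy as np
from scipy.spatial import cKDTree

def frame_reconstruction_error(scan_vertices, reconstructed_vertices):
    # Distance from every scan vertex to its nearest reconstructed-mesh vertex.
    dists, _ = cKDTree(reconstructed_vertices).query(scan_vertices)
    return float(np.mean(dists)), float(np.median(dists))
```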
On the grounds of the obtained error statistics, it can be seen that the proposed method features better reconstruction quality than the reference method. The reconstruction errors of the proposed method were fundamentally lower than those in the case of SMPL, and the error distribution was shifted towards smaller values, indicating better reconstruction in a wider part of the mesh. The average error in the “jumping jacks” sequence increased for frames 7065–7090 (Figure 13), where the motion was significantly faster and the arms were raised almost vertically (compare Figure 1), increasing the reconstruction difficulty in terms of arm, shoulder, and head occlusions. The median error did not change much, confirming the local character of the increase in the reconstruction error in these frames. Figure 17 shows that our algorithm performs well with different or non-ideal skinning weights.
The weaknesses of the proposed method include the uneven distribution of the resulting vertices in the end areas of the parametric mapping for given segments (Figure 18a). The reason for this lies in the mapping function, where the co-ordinates thicken around the poles, thus bringing more vertices into a similar area on the shape map. Moreover, in some frames, the output mesh was distorted due to data loss in the shape map (Figure 18b). Reconstruction based on such an erroneous map introduced artifacts in the resulting mesh. This problem was minimized by including a small fragment of the adjacent segments in the shape map calculation; however, this solution is limited by the ability of the parametric mapping to map complex geometry into the 2D parametric space without overlapping. Regarding pose tracking, one must admit that the use of the ICP method limits robustness with respect to the input measurement frequency. In the presence of input data captured at a much lower frequency than 60 fps, along with fast movement of the scanned person, the pose tracking algorithm may return an inadequate pose. However, in this paper, we focus on high-resolution data and reconstruction accuracy rather than robustness to low-frequency data.

5. Conclusions and Future Work

In summary, in this paper, we have presented a method of analysis for human body 3D scan sequences that allows for the generation of skeletal animations, along with body shape deformation animations. The algorithm consists of three stages: pose tracking, shape mapping, and template mesh morphing. We performed quality tests on the obtained morphed meshes for two hundred frames from two sequences of the Dynamic FAUST dataset, and compared our method to the SMPL method by performing the same test on its output. The reproduction quality of our method was demonstrated by a reconstruction error almost twofold smaller than that of the state-of-the-art SMPL method. Moreover, an important advantage of our method is the possibility to use a template with any mesh topology and resolution, thanks to which one can generate an efficient preview animation with less detail. Furthermore, the skeleton used can also be customized.
One direction of future improvement concerns the transition areas between adjacent segments, where artifacts may appear due to the loss of data in the shape map calculation (only points from a given segment and a small surrounding area participate in the calculation) and due to vertex squeezing after the parametric mapping of the shape map. Furthermore, with small resolutions of the shape map, modest faults appear on the surface of the reconstructed mesh, caused by value averaging in the shape map cells.
In the future, we would like to achieve even better reconstruction quality by replacing the averaging and mipmapping in the shape maps with polynomial approximation or another mathematical model. This may result in a smoother surface of the reconstructed mesh and better handling of areas without measurement points. The second field for future development is the template mesh morphing algorithm. Our goal will be to achieve a smoother output mesh through the application of a more advanced method for fitting the template mesh to the shape map. Last but not least, we would like to pursue more tests on the robustness of the pose tracking algorithm with lower-frequency capture.

Author Contributions

Conceptualization, M.N.; methodology, M.N.; software, M.N.; validation, M.N.; writing—original draft preparation, M.N. and R.S.; writing—review and editing, M.N. and R.S.; visualization, M.N.; supervision, R.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We would like to thank Federica Bogo, Javier Romero, Gerard Pons-Moll, Michael J. Black, and the Perceiving Systems department of the Max Planck Institute for Intelligent Systems for making their scanning dataset, Dynamic FAUST, freely available to other researchers.

Conflicts of Interest

R.S. declares no conflict of interest. M.N. is a contractor at Platige Image.

Declarations

Source code will not be available; data will not be available.

Appendix A

Table A1. Full reconstruction accuracy results of our proposed method and the SMPL method in the “jumping jacks” sequence.
Frame No. | Our Average Error (mm) | SMPL Average Error (mm) | Our Median Error (mm) | SMPL Median Error (mm)
70253.908298.426813.401278.01723
70264.658828.392093.501107.99645
70274.606108.354903.494167.99738
70284.721788.435763.501697.99896
70294.622648.364263.492147.99611
70304.537098.360273.511087.99254
70314.595678.358083.495757.98607
70324.619338.356853.496937.99120
70334.612808.360753.498897.98664
70344.760288.438243.507207.98668
70354.613548.366953.494517.99287
70364.655048.366603.494157.98689
70374.645058.369133.506927.99316
70384.811008.452733.504437.99300
70394.660178.370643.514157.99213
70404.817148.439713.521527.99282
70414.695078.378883.531727.98340
70424.726808.369013.545467.98286
70434.728908.372753.553437.98082
70444.742428.385303.573127.99138
70454.762748.381823.573487.98709
70464.790988.382663.593227.98660
70474.806498.410823.591688.01049
70484.971458.485813.600018.02654
70494.769948.435163.577338.04633
70504.775868.442133.590058.06910
70514.763008.460833.606468.08881
70524.976388.560803.617678.12371
70534.772808.489203.602888.11753
70545.041088.621763.600148.16472
70554.862918.519523.594008.16442
70564.933838.531773.610468.16893
70575.126128.534523.666808.17435
70584.993988.550863.630928.19434
70595.264448.555783.676868.17435
70605.158828.588643.639318.20919
70615.402348.591333.670568.20371
70625.642798.572143.700978.16757
70635.699038.565633.726658.16115
70645.651428.569943.734638.17997
70655.665498.592143.748998.20935
70665.638488.564623.758178.18326
70675.685098.548983.777958.17640
70685.577618.574823.765218.21208
70695.579648.568473.793938.20377
70705.590758.580033.798938.20812
70715.727948.553843.818918.18958
70725.948838.507023.848188.14496
70735.945478.494593.858118.12424
70745.944968.462383.850968.09743
70755.898128.436623.863748.06921
70765.672608.472953.857968.10853
70775.765318.482723.855818.11858
70785.771598.443383.868988.09267
70795.831368.450783.862698.08869
70805.628038.445163.857258.09483
70815.667098.437353.856448.09417
70825.967118.422653.869538.08210
70836.061658.394403.879518.04085
70846.151678.399253.875088.04882
70856.263968.401323.871288.03688
70866.118048.436393.857738.07872
70875.940428.470263.830298.09297
70885.708268.481943.804988.11072
70895.666788.507803.805808.14121
70905.688338.519143.795078.14289
70915.587968.524033.776258.14797
70925.404068.537473.752478.16126
70935.318068.523523.731538.13595
70945.320318.528463.724208.14968
70955.211108.551483.719968.16910
70965.131298.559143.708128.17256
70975.038458.583433.679238.19739
70984.829688.585443.701288.19974
70994.967318.571703.730678.18905
71004.686238.549033.695638.15710
71014.517338.535333.674928.15793
71024.379308.522913.646078.15096
71034.447588.452443.680638.09297
71044.550308.359763.683937.99483
71054.634738.299283.703247.92592
71064.791268.261993.728947.87648
71074.953718.221923.745077.82597
71085.032738.236753.741657.84297
71095.017048.259253.735847.87260
71104.983818.290353.736657.92534
71114.933718.298783.738327.93425
71124.914148.318543.735577.96436
71135.129538.345953.741347.99409
71145.164008.359903.730118.00373
71155.483368.346323.736067.98486
71165.485338.372323.727598.00813
71175.623118.393713.763428.02854
71185.620828.440973.741968.07464
71195.745968.434223.743758.06058
71205.855218.443493.751848.05165
71215.899668.462173.765428.07147
71225.848438.482393.768928.09165
71235.704378.496283.756658.10090
7124 5.78201 8.51798 3.76182 8.11418
Table A2. Full reconstruction accuracy results of our proposed method and the SMPL method in the “punching” sequence.
Frame No. | Our Average Error (mm) | SMPL Average Error (mm) | Our Median Error (mm) | SMPL Median Error (mm)
58074.107558.345203.517657.98764
5808 4.382418.340043.575687.97401
5809 4.372878.346633.562037.98234
5810 4.376858.360353.567378.00188
5811 4.355418.350373.557537.98717
5812 4.356828.348693.562507.98448
5813 4.360038.353953.574397.99814
5814 4.359808.356083.564667.98632
5815 4.345688.359933.552967.99955
5816 4.331708.357223.552798.00542
5817 4.332018.365893.553977.99869
5818 4.359268.360603.552317.99988
5819 4.357328.364333.547877.99590
5820 4.373038.375073.557848.00605
5821 4.345128.369593.539128.00202
5822 4.334768.368213.534228.00523
5823 4.338138.384983.540358.01786
5824 4.343828.382993.539158.02273
5825 4.377318.387783.548348.03060
5826 4.405198.406783.554178.04400
5827 4.441128.412163.558698.05160
5828 4.497958.417993.579448.04894
5829 4.547778.419533.598498.06309
5830 4.582018.421583.608578.05128
5831 4.772508.535193.619968.06929
5832 4.658238.420683.625358.04767
5833 4.688748.416483.630978.05728
5834 4.689768.425373.640878.06205
5835 4.703588.420313.639158.06210
5836 4.714118.425173.646748.05974
5837 4.695148.433463.651638.07723
5838 4.699918.454293.654958.10936
5839 4.667908.463883.656218.11174
5840 4.587208.465643.651008.10221
5841 4.525118.471023.650058.10613
5842 4.456368.472353.642208.11431
5843 4.435878.457853.642208.08570
5844 4.570288.447423.635968.07991
5845 4.922518.569883.661158.11222
5846 4.895188.473883.661048.09620
5847 4.970828.476073.672138.09924
5848 4.964228.463413.653478.08369
5849 4.956768.445443.645698.08262
5850 5.036388.543333.643458.09801
5851 4.818518.453323.623948.10035
5852 4.788948.449233.625098.08438
5853 4.721038.456183.628408.09312
5854 4.648998.460893.624748.10440
5855 4.608878.443533.610388.07385
5856 4.767398.562833.623168.08879
5857 4.597258.437383.623958.07007
5858 4.567288.426903.612298.06597
5859 4.570828.423713.618818.06411
5860 4.582058.414893.623788.03821
5861 4.616198.426113.623938.06020
5862 4.609168.430863.628228.06449
5863 4.577138.436303.616758.08429
5864 4.569608.451053.631688.08695
5865 4.530118.447643.622168.08853
5866 4.524408.461953.628518.10513
5867 4.533428.475703.629788.10837
5868 4.617398.464893.641508.11115
5869 4.704268.488083.647098.12363
5870 4.915998.506693.682138.13546
5871 5.074718.537953.689358.15588
5872 5.212868.503623.689928.14619
5873 5.182918.522753.687388.15126
5874 5.263208.506833.671758.14706
5875 5.214748.483103.669208.11805
5876 5.178388.472563.660608.11862
5877 5.129748.477743.652848.11165
5878 5.091928.486063.654868.12421
5879 5.027948.492483.642448.11425
5880 5.171448.627343.646008.12136
5881 4.882888.469273.662478.10742
5882 4.768878.467563.652558.09359
5883 4.628958.455313.640468.09549
5884 4.484908.443113.628358.07985
5885 4.367668.428293.624448.06102
5886 4.380708.434203.631928.07419
5887 4.454168.427483.643138.04862
5888 4.582748.434703.665138.07614
5889 4.671778.443643.684648.09177
5890 4.735288.452863.690608.09554
5891 4.811608.457423.700358.09135
5892 5.022768.597603.715068.12198
5893 4.896698.466473.739848.11216
5894 4.927138.474603.747428.11257
5895 4.943028.476173.750758.12800
5896 4.968428.467443.754638.11609
5897 4.903928.470573.750458.11155
5898 4.910288.453583.755868.10087
5899 4.884728.439443.764218.08404
5900 5.042298.560733.759408.08762
5901 4.835538.443953.759348.08745
5902 4.757708.426433.734918.05841
5903 4.812588.583913.700838.08321
5904 4.467188.447883.642818.08399
5905 4.332908.476473.617788.10652
5906 4.34487 8.50507 3.63095 8.14041

References

  1. Chen, L.; Wei, H.; Ferryman, J. A survey of human motion analysis using depth imagery. Pattern Recognit. Lett. 2013, 34, 1995–2006. [Google Scholar] [CrossRef]
  2. Xia, S.; Gao, L.; Lai, Y.-K.; Yuan, M.-Z.; Chai, J. A Survey on Human Performance Capture and Animation. J. Comput. Sci. Technol. 2017, 32, 536–554. [Google Scholar] [CrossRef] [Green Version]
  3. Zhang, P.; Siu, K.; Zhang, J.; Liu, C.K.; Chai, J. Leveraging Depth Cameras and Wearable Pressure Sensors for Full-body Kinematics and Dynamics Capture. ACM Trans. Graph. 2014, 33, 1–14. [Google Scholar] [CrossRef]
  4. Chen, Y.; Medioni, G. Object modeling by registration of multiple range images. In Proceedings of the 1991 IEEE International Conference on Robotics and Automation, Sacramento, CA, USA, 9−11 April 1991. [Google Scholar]
  5. Rusinkiewicz, S.; Levoy, M. Efficient variants of the ICP algorithm. In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, Quebec City, QC, Canada, 28 May–1 June 2001. [Google Scholar]
  6. Levenberg, K. A method for the solution of certain non-linear problems in least squares. Q. Appl. Math. 1944, 2, 164–168. [Google Scholar] [CrossRef] [Green Version]
  7. Marquardt, D.W. An Algorithm for Least-Squares Estimation of Nonlinear Parameters. J. Soc. Ind. Appl. Math. 1963, 11, 431–441. [Google Scholar] [CrossRef]
  8. Habermann, M.; Xu, W.; Zollhöfer, M.; Pons-Moll, G.; Theobalt, C. LiveCap: Real-time human performance capture from monocular video. ACM Trans. Graph. 2019, 38, 1–17. [Google Scholar] [CrossRef]
  9. Tzionas, D.; Srikantha, A.; Aponte, P.; Gall, J. Capturing Hand Motion with an RGB-D Sensor, Fusing a Generative Model with Salient Points. In Proceedings of the 36th German Conference on Pattern Recognition, Münster, Germany, 2−5 September 2014. [Google Scholar]
  10. Ren, C.; Prisacariu, V.; Kähler, O.; Reid, I.; Murray, D. Real-Time Tracking of Single and Multiple Objects from Depth-Colour Imagery Using 3D Signed Distance Functions. Int. J. Comput. Vis. 2017, 124, 80–95. [Google Scholar] [CrossRef] [Green Version]
  11. Gao, Z.; Yu, Y.; Zhou, Y.; Du, S. Leveraging two kinect sensors for accurate full-body motion capture. Sensors 2015, 15, 24297–24317. [Google Scholar] [CrossRef] [Green Version]
  12. Hogue, A.; Gill, S.; Jenkin, M. Automated Avatar Creation for 3D Games. In Proceedings of the 2007 Conference on Future Play, Toronto, Canada, 15–17 November 2007. [Google Scholar]
  13. Barros, J.M.D.; Garcia, F.; Sidibé, D. Real-Time Human Pose Estimation from Body-Scanned Point Clouds. In Proceedings of the International Conference on Computer Vision Theory and Applications, Berlin, Germany, 11–14 March 2015. [Google Scholar] [CrossRef] [Green Version]
  14. Wei, X.; Zhang, P.; Chai, J. Accurate Realtime Full-Body Motion Capture Using a Single Depth Camera. ACM Trans. Graph. 2012, 31, 1–12. [Google Scholar] [CrossRef] [Green Version]
  15. Tsoli, A.; Argyros, A.A. Joint 3d tracking of a deformable object in interaction with a hand. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 484–500. [Google Scholar]
  16. Tompson, J.; Stein, M.; Lecun, Y.; Perlin, K. Real-time continuous pose recovery of human hands using convolutional networks. ACM Trans. Graph. 2014, 33, 1–10. [Google Scholar] [CrossRef]
  17. Giannarou, S.; Zhang, Z.; Yang, G.Z. Deformable structure from motion by fusing visual and inertial measurement data. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Portugal, 7–12 October 2012. [Google Scholar]
  18. Gotardo, P.F.U.; Martinez, A. Computing Smooth Time Trajectories for Camera and Deformable Shape in Structure from Motion with Occlusion. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2051–2065. [Google Scholar] [CrossRef] [PubMed]
  19. Akhter, I.; Sheikh, Y.; Khan, S.; Kanade, T. Nonrigid Structure from Motion in Trajectory Space. In Proceedings of the Advances in Neural Information Processing Systems 21, Vancouver, BC, Canada, 8–11 December 2008. [Google Scholar]
  20. Olsen, S.I.; Bartoli, A. Implicit Non-Rigid Structure-from-Motion with Priors. J. Math. Imaging Vis. 2008, 31, 233–244. [Google Scholar] [CrossRef]
  21. Bregler, C.; Hertzmann, A.; Biermann, H. Recovering non-rigid 3D shape from image streams. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Hilton Head Island, SC, USA, 15 June 2000. [Google Scholar] [CrossRef] [Green Version]
  22. Tomasi, C.; Kanade, T. Shape and motion from image streams under orthography: A factorization method. Int. J. Comput. Vis. 1992, 9, 137–154. [Google Scholar] [CrossRef]
  23. Tsoli, A.; Loper, M.; Black, M.J. Model-based anthropometry: Predicting measurements from 3D human scans in multiple poses. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Steamboat Springs, CO, USA, 24–26 March 2014. [Google Scholar] [CrossRef] [Green Version]
  24. Anguelov, D.; Srinivasan, P.; Koller, D.; Thrun, S.; Rodgers, J.; Davis, J. SCAPE: Shape Completion and Animation of People. ACM Trans. Graph. 2005, 24, 408–416. [Google Scholar] [CrossRef]
  25. Bogo, F.; Romero, J.; Pons-Moll, G.; Black, M.J. Dynamic FAUST: Registering human bodies in motion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  26. Bogo, F.; Romero, J.; Loper, M.; Black, M.J. FAUST: Dataset and evaluation for 3D mesh registration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014. [Google Scholar]
  27. Bogo, F.; Kanazawa, A.; Lassner, C.; Gehler, P.; Romero, J.; Black, M.J. Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016. [Google Scholar]
  28. Huang, Y.; Bogo, F.; Lassner, C.; Kanazawa, A.; Gehler, P.V.; Romero, J.; Akhter, I.; Black, M.J. Towards Accurate Marker-less Human Shape and Pose Estimation over Time. In Proceedings of the 2017 International Conference on 3D Vision (3DV), Qingdao, China, 10–12 October 2017. [Google Scholar]
  29. Loper, M.; Mahmoody, N.; Black, M.J. MoSh: Motion and Shape Capture from Sparse Markers. ACM Trans. Graph. 2014, 33, 1–13. [Google Scholar] [CrossRef]
  30. Bogo, F.; Black, M.J.; Loper, M.; Romero, J. Detailed Full-Body Reconstructions of Moving People from Monocular RGB-D Sequences. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 2300–2308. [Google Scholar] [CrossRef]
  31. Hirshberg, D.; Loper, M.; Rachlin, E.; Black, M.J. Coregistration: Simultaneous Alignment and Modeling of Articulated 3D Shape. In Proceedings of the 12th European Conference on Computer Vision, Florence, Italy, 7–13 October 2012. [Google Scholar] [CrossRef] [Green Version]
  32. Kim, M.; Pons-Moll, G.; Pujades, S.; Bang, S.; Kim, J.; Black, M.J.; Lee, S.-H. Data-Driven Physics for Human Soft Tissue Animation. ACM Trans. Graph. 2017, 36, 1–12. [Google Scholar] [CrossRef] [Green Version]
  33. Pons-Moll, G.; Pujades, S.; Hu, S.; Black, M.J. ClothCap: Seamless 4D Clothing Capture and Retargeting. ACM Trans. Graph. 2017, 36, 1–15. [Google Scholar] [CrossRef]
  34. Loper, M.; Mahmood, N.; Romero, J.; Pons-Moll, G.; Black, M.J. SMPL: A Skinned Multi-Person Linear Model. ACM Trans. Graph. 2015, 34, 1–16. [Google Scholar] [CrossRef]
  35. Hesse, N.; Pujades, S.; Romero, J.; Black, M.J.; Bodensteiner, C.; Arens, M.; Hofmann, U.G.; Tacke, U.; Hadders-Algra, M.; Weinberger, R.; et al. Learning an Infant Body Model from RGB-D Data for Accurate Full Body Motion Analysis. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, 16–20 September 2018. [Google Scholar]
  36. Hesse, N.; Pujades, S.; Black, M.J.; Arens, M.; Hofmann, U.G.; Schroeder, A.S. Learning and Tracking the 3D Body Shape of Freely Moving Infants from RGB-D sequences. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2540–2551. [Google Scholar] [CrossRef] [Green Version]
  37. Pons-Moll, G.; Romero, J.; Mahmood, N.; Black, M.J. Dyna: A Model of Dynamic Human Shape in Motion. ACM Trans. Graph. 2015, 34, 1–14. [Google Scholar] [CrossRef]
  38. Santesteban, I.; Garces, E.; Otaduy, M.A.; Casas, D. SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for Parametric Humans. Comput. Graph. Forum 2020, 39, 65–75. [Google Scholar] [CrossRef]
  39. Wang, K.; Xie, J.; Zhang, G.; Liu, L.; Yang, J. Sequential 3D Human Pose and Shape Estimation from Point Clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Online, 16–18 June 2020; pp. 7275–7284. [Google Scholar] [CrossRef]
  40. Kingma, D.P.; Welling, M. Auto-Encoding Variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations (ICLR2014), Banff, AB, Canada, 14–16 April 2014. [Google Scholar]
  41. Kostrikov, I.; Jiang, Z.; Panozzo, D.; Zorin, D.; Bruna, J. Surface Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 2540–2548. [Google Scholar] [CrossRef]
  42. Litany, O.; Bronstein, A.; Bronstein, M.; Makadia, A. Deformable Shape Completion with Graph Convolutional Autoencoders. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake, UT, USA, 18–22 June 2018; pp. 1886–1895. [Google Scholar]
  43. Tretschk, E.; Tewari, A.; Zollhöfer, M.; Golyanik, V.; Theobalt, C. DEMEA: Deep Mesh Autoencoders for Non-Rigidly Deforming Objects. European Conference on Computer Vision (ECCV), 23–28 August 2020. Available online: https://arxiv.org/pdf/1905.10290.pdf (accessed on 24 September 2020).
  44. Tan, Q.; Gao, L.; Lai, Y.K.; Xia, S. Variational Autoencoders for Deforming 3D Mesh Models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 5841–5850. [Google Scholar]
  45. Jiang, B.; Zhang, J.; Cai, J.; Zheng, J. Learning 3D Human Body Embedding. arXiv. 2019, pp. 1–14. Available online: https://arxiv.org/abs/1905.05622v1 (accessed on 22 March 2020).
  46. Jiang, B.; Zhang, J.; Cai, J.; Zheng, J. Disentangled Human Body Embedding Based on Deep Hierarchical Neural Network. IEEE Trans. Vis. Comput. Graph. 2020, 26, 2560–2575. [Google Scholar] [CrossRef]
  47. Gao, L.; Lai, Y.-K.; Yang, J.; Ling-Xiao, Z.; Xia, S.; Kobbelt, L. Sparse Data Driven Mesh Deformation. IEEE Trans. Vis. Comput. Graph. 2019. [Google Scholar] [CrossRef] [Green Version]
  48. Jiang, H.; Cai, J.; Zheng, J. Skeleton-Aware 3D Human Shape Reconstruction From Point Clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 5431–5441. [Google Scholar]
  49. Baran, I.; Popovic, J. Automatic Rigging and Animation of 3D Characters. ACM Trans. Graph. 2007, 26, 72. [Google Scholar] [CrossRef]
  50. Floater, M.S.; Hormann, K. Surface Parameterization: A Tutorial and Survey. In Advances in Multiresolution for Geometric Modelling; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar] [CrossRef]
  51. Praun, E.; Hoppe, H. Spherical parametrization and remeshing. ACM Trans. Graph. 2003, 22, 340–349. [Google Scholar] [CrossRef]
  52. Bronshtein, I.N.; Semendyayev, K.A.; Musiol, G.; Muehlig, H. Handbook of Mathematics; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
Figure 1. Scans from input sequence “jumping jacks”. Every fifth frame of first 100 scans was used to increase readability. Numbers denote frame numbers.
Figure 2. Scans from input sequence “punching”. Every fifth frame of first 100 scans was used to increase readability. Numbers denote frame numbers.
Figure 3. Method overview. From left to right: sequence of 3D scans (gray) and skeleton for first frame; pose tracking step; resulting skeleton for frame (t + 1) and segmentation (multicolor) for frame (t); shape maps computation step; template mesh (blue); template mesh deformation step; and final reconstructed mesh for frame (t).
Figure 4. Bone update procedure. From left to right: input scan for frame (t) with skeleton (bottom), scan for frame (t + 1) (upper); extracted segment for bone (B) (green) in frame (t); Iterative Closest Point (ICP) step; segment for bone (B) fitted to frame (t + 1), using ICP; constraints step; obtained transformation change of the bone (B) in frame (t + 1); and resulting skeleton for frame (t + 1), along with scan for frame (t + 1).
Figure 5. The segmentation algorithm.
Figure 6. Cut planes. Red arrows show bone vectors of the currently considered bone, Bn, the previous bone Bn − 1, and next bone Bn + 1. The point (P) cannot be assigned to bone Bn if it lies behind the cutting plane Zn. The plane Zn is defined by the normal vector NZn and the first joint of bone Bn (Jn0).
Figure 7. Normal vector parallelism condition for point (P) with normal vector (N) and radius vector (R).
Figure 8. Shape map construction schema. From left to right: segment scan; scan vertices remapped to parametric space (u,v) of shape map; shape map values (color coded) computed from average of points in grid cell; and shape map values (color coded) after mipmapping step to fill missing values.
Figure 9. Spherical mapping. Left image shows mapping shape; right image shows corresponding shape map parametrization.
Figure 10. Sample shape map for left arm segment in frame 7025 of the “jumping jacks” sequence. Left: mesh generated with parameter space sampling; right: shape map directly shown with color-coded values. One can see a blue rectangular area in the right bottom part of the shape map that corresponds to the upper left part of the mesh. No points were present in this area of the arm segment, and, so, the shape map values are zero. The shape map used in this example has a resolution of 100 × 120. The segment was mapped by using capsule mapping.
Figure 11. From left to right: template mesh segment for torso, template after registration, and probed shape map. The map was sampled with resolution 650 × 600 in parameter space.
Figure 12. Reconstructions of several frames chosen from the “jumping jacks” sequence by our method.
Figure 13. Reconstruction error for “jumping jacks” sequence. X-axis denotes frame number; Y-axis denotes reconstruction error (in mm).
Figure 14. Reconstruction error for the “punching” sequence. X-axis denotes frame number; Y-axis denotes reconstruction error (in mm).
Figure 15. Reconstruction error histogram. (a) for frame 7049 of “jumping jacks” sequence; (b) for frame 5807 of “punching” sequence.
Figure 16. (a) From left to right: SMPL reconstruction, our reconstruction, and source scan; (b) Reconstruction results for template meshes of different resolution. The left mesh has 6295 vertices, whereas the right mesh has 43,489 vertices.
Figure 17. Influence of change in skinning weights. Left column shows results for the skinning weights used in our case; right column shows results for weights changed in the knee area. From top to bottom: skinning weights, posed template mesh, and final deformed mesh.
Figure 18. (a) Vertex squeezing in collar-bone area; (b) Reconstruction artifacts on neck as a result of lacking shape map support points.
Table 1. Reconstruction accuracies of our proposed method and the Skinned Multi-Person Linear (SMPL) method.
Frame No. | Our Average Error (mm) | SMPL Average Error (mm) | Our Median Error (mm) | SMPL Median Error (mm)
7025 | 3.90829 | 8.42681 | 3.40127 | 8.01723
7030 | 4.53709 | 8.36027 | 3.51108 | 7.99254
7035 | 4.61354 | 8.36695 | 3.49451 | 7.99287
7040 | 4.81714 | 8.43971 | 3.52152 | 7.99282
7045 | 4.76274 | 8.38182 | 3.57348 | 7.98709
7050 | 4.77586 | 8.44213 | 3.59005 | 8.06910
7055 | 4.86291 | 8.51952 | 3.59400 | 8.16442
7060 | 5.15882 | 8.58864 | 3.63931 | 8.20919
7065 | 5.66549 | 8.59214 | 3.74899 | 8.20935
7070 | 5.59075 | 8.58003 | 3.79893 | 8.20812
7075 | 5.89812 | 8.43662 | 3.86374 | 8.06921
7080 | 5.62803 | 8.44516 | 3.85725 | 8.09483
7085 | 6.26396 | 8.40132 | 3.87128 | 8.03688
7090 | 5.68833 | 8.51914 | 3.79507 | 8.14289
7095 | 5.21110 | 8.55148 | 3.71996 | 8.16910
7100 | 4.68623 | 8.54903 | 3.69563 | 8.15710
7105 | 4.63473 | 8.29928 | 3.70324 | 7.92592
7110 | 4.98381 | 8.29035 | 3.73665 | 7.92534
7115 | 5.48336 | 8.34632 | 3.73606 | 7.98486
7120 | 5.85521 | 8.44349 | 3.75184 | 8.05165
7124 | 5.78201 | 8.51798 | 3.76182 | 8.11418
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
