Convolutional Mesh Regression For Single-Image Human Shape Reconstruction
Abstract
This paper addresses the problem of 3D human pose and shape estimation from a single image. Previous approaches consider a parametric model of the human body, SMPL, and attempt to regress the model parameters that give rise to a mesh consistent with image evidence. This parameter regression has been a very challenging task, with model-based approaches underperforming compared to nonparametric solutions in terms of pose estimation. In our work, we propose to relax this heavy reliance on the model's parameter space. We still retain the topology of the SMPL template mesh, but instead of predicting model parameters, we directly regress the 3D location of the mesh vertices. This is a heavy task for a typical network, but our key insight is that the regression becomes significantly easier using a Graph-CNN. This architecture allows us to explicitly encode the template mesh structure within the network and leverage the spatial locality the mesh has to offer. Image-based features are attached to the mesh vertices and the Graph-CNN is responsible for processing them on the mesh structure, while the regression target for each vertex is its 3D location. Having recovered the complete 3D geometry of the mesh, if we still require a specific model parametrization, this can be reliably regressed from the vertex locations. We demonstrate the flexibility and the effectiveness of our proposed graph-based mesh regression by attaching different types of features to the mesh vertices. In all cases, we outperform the comparable baselines relying on model parameter regression, while we also achieve state-of-the-art results among model-based pose estimation approaches. (Project Page: seas.upenn.edu/~nkolot/projects/cmr)

Figure 1: Summary of our approach. Given an input image we directly regress a 3D shape with graph convolutions. Optionally, from the 3D shape output we can regress the parametric representation of a body model.

1. Introduction

Analyzing humans from images goes beyond estimating the 2D pose for one person [27, 47] or multiple people [2, 32], or even estimating a simplistic 3D skeleton [24, 25]. Our understanding relies heavily on being able to properly reconstruct the complete 3D pose and shape of people from monocular images. And while this problem is well addressed in settings with multiple cameras [8, 14], the excessive ambiguity, the limited training data, and the wide range of imaging conditions make this task particularly challenging in the monocular case.

Traditionally, optimization-based approaches [1, 18, 49] have offered the most reliable solution for monocular pose and shape recovery. However, the slow running time, the reliance on a good initialization and the typical failures due to bad local minima have recently shifted the focus to learning-based approaches [15, 18, 28, 31, 39, 43] that regress pose and shape directly from images. The majority of these works investigate what is the most reliable modality to regress pose and shape from. Surface landmarks [18], pose keypoints and silhouettes [31], semantic part segmentation [28], or raw pixels [15] have all been considered as the network input. And while the input representation topic has received much debate, all the above approaches nicely conform to the SMPL model [21] and use its parametric representation as the regression target of choice. However, the decision to commit to a particular parametric space can be quite constraining itself. For example, SMPL does not model hand pose or facial expressions [14, 36]. What is even more alarming is that the model parameter space might not be appropriate as a regression target. In the case of SMPL, the pose space is expressed in the form of 3D rotations, a pretty challenging prediction target [23, 26]. Depending on the selected 3D rotation representation (e.g., axis angle, rotation matrices, quaternions), we might face problems of periodicity, non-minimal representation, or discontinuities, which complicate the prediction task. And in fact, all the above model-based approaches underperform in pose estimation metrics compared to approaches regressing a less informative, yet more accurate, 3D skeleton through 3D joint regression [3, 24, 29, 38].
[Figure 2 diagram: input image → Encoder → per-vertex feature vectors (image features ⊕ template 3D coordinates) → Graph CNN → regressed 3D vertex coordinates → output mesh]

Figure 2: Overview of the proposed framework. Given an input image, an image-based CNN encodes it in a low-dimensional feature vector. This feature vector is embedded in the graph defined by the template human mesh by attaching it to the 3D coordinates $(x_i^t, y_i^t, z_i^t)$ of every vertex $i$. We then process it through a series of Graph Convolutional layers and regress the 3D vertex coordinates $(\hat{x}_i, \hat{y}_i, \hat{z}_i)$ of the deformed mesh.

In this work, we propose to take a more hybrid route towards pose and shape regression. Even though we preserve the template mesh introduced by SMPL, we do not directly regress the SMPL model parameters. Instead, our regression target is the 3D mesh vertices. Considering the excessive number of vertices of the mesh, if addressed naively, this would be a particularly heavy burden for the network. Our key insight, though, is that this task can be effectively and efficiently addressed by the introduction of a Graph-CNN. This architecture enables the explicit encoding of the mesh structure in the network, and leverages the spatial locality of the graph. Given a single image (Figure 2), any typical CNN can be used for feature extraction. The extracted features are attached on the vertex coordinates of the template mesh, and the processing continues on the graph structure defined for the Graph-CNN. In the end, each vertex has as target its 3D location in the deformed mesh. This allows us to recover the complete 3D geometry of the human body without explicitly committing to a pre-specified parametric space, leaving the mesh topology as the only hand-designed choice. Conveniently, after estimating the 3D position for each vertex, if we need our prediction to conform to a specific model, we can regress its parameters quite reliably from the mesh geometry (Figure 1). This enables a more hybrid usage for our approach, making it directly comparable to model-based approaches. Furthermore, our graph-based processing is largely agnostic to the input type, allowing us to attach features extracted from RGB pixels [15], semantic part segmentation [28], or even from dense correspondences [6]. In all these cases we demonstrate that our approach outperforms the baselines that regress model parameters directly from the same type of features, while overall we achieve state-of-the-art pose estimation results among model-based baselines.

Our contributions can be summarized as follows:

• We reformulate the problem of human pose and shape estimation in the form of regressing the 3D locations of the mesh vertices, to avoid the difficulties of direct model parameter regression.

• We propose a Graph CNN for this task which encodes the mesh structure and enables the convolutional mesh regression of the 3D vertex locations.

• We demonstrate the flexibility of our framework by considering different input representations, always outperforming the baselines regressing the model parameters directly.

• We achieve state-of-the-art results among model-based pose estimation approaches.
2. Related work

There is rich recent literature on 3D pose estimation in the form of a simplistic body skeleton, e.g., [3, 19, 22, 24, 25, 29, 30, 34, 35, 38, 40, 41, 42, 50, 51]. However, in this Section, we focus on the more relevant works recovering the full shape and pose of the human body.

Optimization-based shape recovery: Going beyond a simplistic skeleton, and recovering the full pose and shape, initially the most successful approaches followed optimization-based solutions. The work of Guan et al. [5] relied on annotated 2D landmarks and optimized for the parameters of the SCAPE parametric model that generated a mesh optimally matching this evidence. This procedure was made automatic with the SMPLify approach of Bogo et al. [1], where the 2D keypoints were localized through the help of a CNN [32]. Lassner et al. [18] included auxiliary landmarks on the surface of the human body, and additionally considered the estimated silhouette during the fitting process. Zanfir et al. [49] similarly optimized for consistency of the reprojected mesh with semantic parts of the human body, while extending the approach to work for multiple people as well. Despite the reliable results obtained, the main concern for approaches of this type is that they pose a complicated non-convex optimization problem. This means that the final solution is very sensitive to the initialization, the optimization can get stuck in local minima, and simultaneously the whole procedure can take several minutes to complete. These drawbacks have motivated the increased interest in learning-based approaches, like ours, where the pose and shape are regressed directly from images.

Direct parametric regression: When it comes to pose and shape regression, the vast majority of works adopt the SMPL parametric model and consider regression of pose and shape parameters. Lassner et al. [18] detect 91 landmarks on the body surface and use a random forest to regress the SMPL model parameters for pose and shape. Pavlakos et al. [31] rely on a smaller number of keypoints and body silhouettes to regress the SMPL parameters. Omran et al. [28] follow a similar strategy but use a part segmentation map as the intermediate representation. On the other hand, Kanazawa et al. [15] attempt to regress the SMPL parameters directly from images, using a weakly supervised approach relying on 2D keypoint reprojection and a pose prior learnt in an adversarial manner. Tung et al. [43] present a self-supervised approach for the same problem, while Tan et al. [39] rely on weaker supervision in the form of body silhouettes. The common theme of all these works is that they have focused on using the SMPL parameter space as a regression target. However, the 3D rotations involved as the pose parameters have created issues in the regression (e.g., discontinuities or periodicity), and the resulting methods typically underperform in terms of pose estimation compared to skeleton-only baselines. In this work, we propose to take an orthogonal approach to them, by regressing the 3D location of the mesh vertices by means of a Graph-CNN. Our approach is transparent to the type of the input representation we use, since the flexibility of the Graph network allows us to consider different types of input representations employed in prior work, like semantic part-based features [28], features extracted directly from raw pixels [15], or even dense correspondences [6].

Nonparametric shape estimation: Recently, nonparametric approaches have also been proposed for pose and shape estimation. Varol et al. [44] use a volumetric reconstruction approach with a voxel output. Different tasks are simultaneously considered for intermediate supervision. Jackson et al. [12] also propose a form of volumetric reconstruction by extending their recent face reconstruction network [11] to work for full body images. The main drawback of these approaches adopting a completely nonparametric route is that, even if they recover an accurate voxelized sculpture of the human body, there is little or no semantic information captured. In fact, to recover the body pose, we need to explicitly perform an expensive body model fitting step using the recovered voxel map, as done in [44]. In contrast to them, we retain the SMPL mesh topology, which allows us to get dense semantic correspondences of our 3D prediction with the image, and in the end we can also easily regress the model's parameters given the 3D location of the vertices.

Graph CNNs: Wang et al. [46] use a Graph CNN to reconstruct meshes of objects from images by deforming an initial ellipsoid. However, mesh reconstruction of arbitrary objects is still an open problem, because shapes of objects even in the same class, e.g., chairs, do not have the same genus. Contrary to generic objects, arbitrary human shapes can be reconstructed as continuous deformations of a template model. In fact, recently there has been a lot of research in applying Graph Convolutions for human shape applications. Verma et al. [45] propose a new data-driven Graph Convolution operator with applications on shape analysis. Litany et al. [20] use a Graph VAE to learn a latent space of human shapes that is useful for shape completion. Ranjan et al. [33] use a mesh autoencoder network to recover a latent representation of 3D human faces from a series of meshes. The main difference of our approach is that we do not aim to learn a generative shape model from 3D shapes, but instead perform single-image shape reconstruction; the input to our network is an image, not a 3D shape. The use of a Graph CNN alone is not new, but we consider as a contribution the insight that Graph CNNs provide a very natural structure to enable our hybrid approach. They assist us in avoiding the SMPL parameter space, which has been reported to have issues with regression [24, 31], while simultaneously allowing the explicit encoding of the graph structure in the network, so that we can leverage spatial locality and preserve the semantic correspondences.
3. Technical approach

In this Section we present our proposed approach for predicting 3D human shape from a single image. First, in Subsection 3.1 we briefly describe the image-based architecture that we use as a generic feature extractor. In Subsection 3.2 we focus on the core of our approach, the Graph CNN architecture that is responsible for regressing the 3D vertex coordinates of the mesh that deforms to reconstruct the human body. Then, Subsection 3.3 describes a way to combine our non-parametric regression with the prediction of SMPL model parameters. Finally, Subsection 3.4 focuses on important implementation details.

3.1. Image-based CNN

The first part of our pipeline consists of a typical image-based CNN following the ResNet-50 architecture [7]. From the original design we ignore the final fully connected layer, keeping only the 2048-D feature vector after the average pooling layer. This CNN is used as a generic feature extractor from the input representation. To demonstrate the flexibility of our approach, we experiment with a variety of inputs, i.e., RGB images, part segmentation and DensePose input [6]. For RGB images we simply use raw pixels as input, while for the other representations, we assume that another network [6] provides us with the predicted part segmentation or DensePose. Although we present experiments with a variety of inputs, our goal is not to investigate the effect of the input representation; rather, we focus our attention on the graph-based processing that follows.
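For concreteness, a minimal sketch of such a truncated feature extractor follows. This is our own illustration rather than the authors' released code, and it assumes the standard torchvision ResNet-50; the `Encoder` wrapper name is ours:

```python
import torch.nn as nn
from torchvision.models import resnet50

class Encoder(nn.Module):
    """ResNet-50 up to and including global average pooling.

    The final fully connected layer is discarded, so the output
    is the 2048-D feature vector described in the text.
    """
    def __init__(self):
        super().__init__()
        backbone = resnet50(pretrained=True)
        # Drop the last child module (the fc classification layer).
        self.features = nn.Sequential(*list(backbone.children())[:-1])

    def forward(self, x):                 # x: (B, 3, H, W)
        f = self.features(x)              # (B, 2048, 1, 1)
        return f.flatten(start_dim=1)     # (B, 2048)
```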
3.2. Graph CNN

At the heart of our approach, we propose to employ a Graph CNN to regress the 3D coordinates of the mesh vertices. For our network architecture we draw inspiration from the work of Litany et al. [20]. We start from a template human mesh with N vertices, as depicted in Figure 2. Given the 2048-D feature vector extracted by the generic image-based network, we attach these features to the 3D coordinates of each vertex in the template mesh. From a high-level perspective, the Graph CNN uses as input the 3D coordinates of each vertex along with the input features, and has the goal of estimating the 3D coordinates for each vertex in the output, deformed mesh. This processing is performed by a series of Graph Convolution layers.

For the graph convolutions we use the formulation from Kipf et al. [17], which is defined as:

$$Y = \tilde{A} X W \quad (1)$$

where $X \in \mathbb{R}^{N \times k}$ is the input feature matrix, $W \in \mathbb{R}^{k \times \ell}$ is the weight matrix, and $\tilde{A} \in \mathbb{R}^{N \times N}$ is the row-normalized adjacency matrix of the graph. Essentially, this is equivalent to performing per-vertex fully connected operations followed by a neighborhood averaging operation.
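As an illustration, such a layer could be implemented as follows. This is our own rendering of Eq. (1), a minimal sketch assuming the row-normalized adjacency matrix of the template mesh has been precomputed:

```python
import torch
import torch.nn as nn

class GraphConvolution(nn.Module):
    """Graph convolution of Kipf & Welling: Y = A~ X W."""
    def __init__(self, adjacency, in_features, out_features):
        super().__init__()
        # adjacency: (N, N) row-normalized adjacency matrix A~,
        # precomputed from the template mesh and kept fixed.
        self.register_buffer('adjacency', adjacency)
        self.linear = nn.Linear(in_features, out_features, bias=False)

    def forward(self, x):            # x: (B, N, in_features)
        x = self.linear(x)           # per-vertex fully connected step: XW
        return self.adjacency @ x    # neighborhood averaging: A~ (XW)
```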
The neighborhood averaging is essential for producing a high quality shape, because it enforces neighboring vertices to have similar features, and thus the output shape is smooth. With this design choice we observed that there is no need for a smoothness loss on the shape, as used for example in [16]. We also experimented with the more powerful graph convolutions proposed in [45], but we did not observe quantitative improvement in the results, so we decided to keep our original and simpler design choice.

For the graph convolution layers, we make use of residual connections, as they significantly speed up training and also lead to higher quality output shapes. Our basic building block is similar to the Bottleneck residual block [7], where 1 × 1 convolutions are replaced by per-vertex fully connected layers and Batch Normalization [9] is replaced by Group Normalization [48]. We noticed that Batch Normalization leads to unstable training and poor test performance, whereas with no normalization the training is very slow and the network can get stuck at local minima and collapse early during training.
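A sketch of such a block under these design choices is shown below. It reuses the `GraphConvolution` sketch above; the bottleneck width and number of normalization groups are illustrative assumptions, not values reported in the paper:

```python
import torch.nn as nn

class GraphBottleneck(nn.Module):
    """Bottleneck residual block operating on per-vertex features.

    Compared to the standard ResNet Bottleneck [7], the 1x1
    convolutions become per-vertex linear layers, BatchNorm becomes
    GroupNorm, and the middle layer is a graph convolution that
    mixes features of neighboring vertices.
    """
    def __init__(self, adjacency, channels, groups=32):
        super().__init__()
        hidden = channels // 2          # illustrative bottleneck width;
                                        # must be divisible by `groups`
        self.lin_in = nn.Linear(channels, hidden)
        self.gconv = GraphConvolution(adjacency, hidden, hidden)
        self.lin_out = nn.Linear(hidden, channels)
        self.norm1 = nn.GroupNorm(groups, hidden)
        self.norm2 = nn.GroupNorm(groups, hidden)
        self.relu = nn.ReLU()

    def _norm(self, norm, x):
        # GroupNorm expects (B, C, N); features are kept as (B, N, C).
        return norm(x.transpose(1, 2)).transpose(1, 2)

    def forward(self, x):                       # x: (B, N, channels)
        y = self.relu(self._norm(self.norm1, self.lin_in(x)))
        y = self.relu(self._norm(self.norm2, self.gconv(y)))
        return x + self.lin_out(y)              # residual connection
```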
Besides the 3D coordinates for each vertex, our Graph CNN also regresses the camera parameters for a weak-perspective camera model. Following Kanazawa et al. [15], we predict a scaling factor s and a 2D translation vector t. Since the prediction of the network is already in the camera frame, we do not need to regress an additional global camera rotation. The camera parameters are regressed from the graph embedding and not from the image features directly. This way we get a much more reliable estimate that is consistent with the output shape.

Regarding training, let $\hat{Y} \in \mathbb{R}^{N \times 3}$ be the predicted 3D shape, $Y$ the ground truth shape, and $X$ the ground truth 2D keypoint locations of the joints. From our 3D shape we can also regress the locations of the predicted 3D joints $\hat{J}_{3D}$, employing the same regressor that the SMPL model uses to recover joints from vertices. Given these 3D joints, we can simply project them on the image plane, $\hat{X} = s\Pi(\hat{J}_{3D}) + t$. Now, we train the network using two forms of supervision. First, we apply a per-vertex L1 loss between the predicted and ground truth shape, i.e.,

$$L_{shape} = \sum_{i=1}^{N} \|\hat{Y}_i - Y_i\|_1. \quad (2)$$

Empirically we found that using an L1 loss leads to more stable training and better performance than an L2 loss. Additionally, to enforce image-model alignment, we also apply an L1 loss between the projected joint locations and the ground truth keypoints, i.e.,

$$L_J = \sum_{i=1}^{M} \|\hat{X}_i - X_i\|_1. \quad (3)$$
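A sketch of these two losses and of the weak-perspective projection follows; the function names are ours, and the joint regressor is assumed to be the (J × N) matrix provided with SMPL:

```python
import torch

def weak_perspective_projection(joints_3d, scale, trans):
    """Project 3D joints as X_hat = s * Pi(J_3D) + t.

    joints_3d: (B, J, 3); scale: (B,); trans: (B, 2).
    Pi is the orthographic projection (keep the x, y coordinates).
    """
    return scale[:, None, None] * joints_3d[..., :2] + trans[:, None, :]

def shape_loss(pred_vertices, gt_vertices):
    """Per-vertex L1 loss of Eq. (2), averaged over the batch."""
    return (pred_vertices - gt_vertices).abs().sum(dim=(-2, -1)).mean()

def keypoint_loss(pred_keypoints_2d, gt_keypoints_2d):
    """Reprojection L1 loss of Eq. (3), averaged over the batch."""
    return (pred_keypoints_2d - gt_keypoints_2d).abs().sum(dim=(-2, -1)).mean()

# Illustrative usage, with joint_regressor a (J, N) matrix as in SMPL:
# joints_3d = joint_regressor @ pred_vertices              # (B, J, 3)
# pred_2d = weak_perspective_projection(joints_3d, s, t)
# loss = shape_loss(pred_vertices, gt_vertices) + keypoint_loss(pred_2d, gt_2d)
```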
Finally, our complete training objective is:

$$L = L_{shape} + L_J. \quad (4)$$

This form of supervised training requires us to have access to images with full 3D ground truth shape. However, based on our empirical observation, it is not necessary for all the training examples to come with ground truth shape. In fact, following the observation of Omran et al. [28], we can leverage additional images that provide only 2D keypoint ground truth. In these cases, we simply ignore the first term of the previous equation and train only with the keypoint loss. We have included an evaluation under this setting of weaker supervision in the Sup. Mat.
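In practice, this mixed supervision amounts to masking the shape term per example; a minimal sketch, where `has_shape_gt` is a hypothetical per-example flag we introduce:

```python
import torch

def total_loss(shape_loss_per_ex, joint_loss_per_ex, has_shape_gt):
    """Eq. (4), dropping the shape term for 2D-only training examples.

    shape_loss_per_ex, joint_loss_per_ex: (B,) per-example losses.
    has_shape_gt: (B,) bool tensor, True where 3D ground truth exists.
    """
    masked_shape = (shape_loss_per_ex * has_shape_gt.float()).mean()
    return masked_shape + joint_loss_per_ex.mean()
```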
3.3. SMPL from regressed shape

Although we demonstrate that non-parametric regression is an easier task for the network, there are still many applications where a parametric representation of the human body can be very useful (e.g., motion prediction). In this Subsection, we present a straightforward way to combine our non-parametric prediction with a particular parametric model, i.e., SMPL. To achieve this goal, we train another network that regresses the pose (θ) and shape (β) parameters of the SMPL parametric model given the regressed 3D shape as input. The architecture of this network can be very simple, i.e., a Multi-Layer Perceptron (MLP) [37] in our implementation. This network is presented in Figure 3 and the loss function for training is:

$$L = L_{shape} + L_J + L_\theta + \lambda L_\beta. \quad (5)$$

Here, $L_{shape}$ and $L_J$ are the losses on the 3D shape and 2D joint reprojection as before, while $L_\theta$ and $L_\beta$ are L2 losses on the SMPL pose and shape parameters respectively.

[Figure 3 diagram: regressed shape → MLP → (θ, β) → SMPL → parametric shape]

Figure 3: Predicting SMPL parameters from regressed shape. Given a regressed 3D shape from the network of Figure 2, we can use a Multi-Layer Perceptron (MLP) to regress the SMPL parameters and produce a shape that is consistent with the original non-parametric shape.

As observed by previous works, e.g., [24, 31], it is challenging to regress the pose parameters θ, which represent 3D rotations in the axis-angle representation. To avoid this, we followed the strategy employed by Omran et al. [28]. More specifically, we convert the parameters from the axis-angle representation to a rotation matrix representation using the Rodrigues formula, and we set the output of our network to regress the elements of the rotation matrices. To ensure that the output is a valid rotation matrix, we project it to the manifold of rotation matrices using the differentiable SVD operation, as sketched below. Although this representation does not explicitly improve our quantitative results, we observed faster convergence during training, so we selected it as a more practical option.
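The projection onto the rotation manifold could look like the following sketch, using PyTorch's differentiable SVD with the standard sign correction that keeps the result in SO(3); this is our illustration of the technique, not the authors' exact code:

```python
import torch

def project_to_rotation(mats):
    """Project (B, 3, 3) matrices to the nearest rotation matrices.

    For M = U S V^T, the closest orthogonal matrix is U V^T; flipping
    the sign of the last singular vector where det(U V^T) = -1 keeps
    the result in SO(3). The SVD is differentiable, so gradients flow
    back to the regressed matrix elements.
    """
    U, _, Vh = torch.linalg.svd(mats)
    det = torch.linalg.det(U @ Vh)                            # (B,)
    ones = torch.ones_like(det)
    D = torch.diag_embed(torch.stack([ones, ones, det], dim=-1))
    return U @ D @ Vh
```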
3.4. Implementation details

An important detail regarding our Graph CNN is that we do not operate directly on the original SMPL mesh; we first subsample it by a factor of 4 and then upsample it back to the original resolution using the technique described in [33]. This is essentially performed by precomputing downsampling and upsampling matrices D and U and left-multiplying them with the per-vertex features every time we need to do resampling, as in the sketch below. This downsampling step helps to avoid the high redundancy of the original mesh due to the spatial locality of the vertices, and decreases memory requirements during training.
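A minimal sketch of this precomputed resampling, with D and U the matrices obtained via [33] (here assumed to be torch sparse tensors); the function name is ours:

```python
import torch

def resample(x, M):
    """Left-multiply per-vertex features by a precomputed matrix.

    x: (B, N_in, C) dense features; M: (N_out, N_in) sparse matrix,
    either the downsampling matrix D or the upsampling matrix U.
    """
    return torch.stack([torch.sparse.mm(M, xi) for xi in x])

# coarse = resample(x, D)       # (B, N, C)   -> (B, N/4, C)
# full   = resample(coarse, U)  # (B, N/4, C) -> (B, N, C)
```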
Regarding the training of the MLP, we employ a 2-step training procedure. First we train the network that regresses the non-parametric shape, and then, with this network fixed, we train the MLP that predicts the SMPL parameters. We also experimented with training them end-to-end, but we observed a decrease in the performance of the network for both the parametric and the non-parametric shape.
4. Empirical evaluation

In this Section, we present the empirical evaluation of our approach. First, we discuss the datasets we use in our evaluation (Subsection 4.1), then we provide training details for our pipeline (Subsection 4.2), and finally, the quantitative and qualitative evaluation (Subsection 4.3) follows.

4.1. Datasets

We employ two datasets that provide 3D ground truth for training, Human3.6M [10] and UP-3D [18], while we evaluate our approach on Human3.6M and the LSP dataset [13].

Human3.6M: It is an indoor 3D pose dataset including subjects performing activities like Walking, Eating and Smoking. We use the subjects S1, S5, S6, S7 and S8 for training, and keep the subjects S9 and S11 for testing. We present results for two popular protocols (P1 and P2, as defined in [15]) and two error metrics (MPJPE and Reconstruction error, as defined in [51]).

UP-3D: It is a dataset created by applying SMPLify [1] on natural images of humans and selecting the successful fits. We use the training set of this dataset for training.
LSP: It is a 2D pose dataset, which also includes segmentation annotations provided by Lassner et al. [18]. We use the test set of this dataset for evaluation.

Table 1: Comparison of regression targets on Human3.6M. The numbers are mean 3D joint errors in mm.

Method                              MPJPE    Reconst. Error
SMPL Parameter Regression [15]      -        77.6
Mesh Regression (FC)                200.8    105.8
Mesh Regression (Graph)             102.1    69.0
Mesh Regression (Graph + SMPL)      113.2    61.3

Figure 4: Using a series of fully connected (FC) layers to regress the vertex 3D coordinates severely complicates the regression task and gives non-smooth meshes, since the network cannot directly leverage the topology of the graph.
4.2. Training details

For the image-based encoder, we use a ResNet-50 model [7] pretrained on ImageNet [4]. All other network components (Graph CNN and MLP for SMPL parameters) are trained from scratch. For our training, we use the Adam optimizer and a batch size of 16, with the learning rate set to 3e-4. We did not use learning rate decay. Training with data only from Human3.6M lasts for 10 epochs, while mixed training with data from Human3.6M and UP-3D requires training for 25 epochs, because of the greater image diversity. To train the MLP that regresses SMPL parameters from our predicted shape, we use 3D shapes from Human3.6M and UP-3D. Finally, for the models using Part Segmentation or DensePose [6] predictions as input, we use the pretrained network of [6] to provide the corresponding predictions.

                                     MPJPE          Reconst. Error
Input    Regression Type             P1     P2      P1     P2
RGB      Parameter [15]              88.0   -       58.1   56.8
RGB      Mesh (Graph + SMPL)         74.7   71.9    51.9   50.1
Parts    Parameter [28]              -      -       -      59.9
Parts    Mesh (Graph + SMPL)         80.4   77.4    56.1   53.3
DP [6]   Parameter [15]              82.7   79.5    57.8   54.9
DP [6]   Mesh (Graph + SMPL)         78.9   74.2    55.3   51.0

Table 2: Comparison of direct SMPL parameter regression versus our proposed mesh regression on Human3.6M (Protocols 1 and 2) for different input representations. The numbers are mean 3D joint errors in mm, with and without Procrustes alignment (Rec. Error and MPJPE respectively). Our results are computed after regressing SMPL parameters from our non-parametric shape. Numbers are taken from the respective works, except for the baseline of [15] on DensePose images, which is evaluated by us.

4.3. Experimental analysis
Regression target: For the initial ablative study, we aim to investigate the importance of our mesh regression for 3D human shape estimation. To this end, we focus on the Human3.6M dataset and we evaluate the regressed shape through 3D pose accuracy. First, we evaluate the direct regression of the 3D vertex coordinates, in comparison to generating the 3D shape implicitly through regression of the SMPL model parameters directly from images. The most relevant baseline in this category is the HMR method of [15]. In Table 1, we present the comparison of this approach (SMPL Parameter Regression) with our non-parametric shape regression (Mesh Regression - Graph). For a fairer comparison, we also include our results for the MLP that regresses SMPL parameters using our non-parametric mesh as input (Mesh Regression - Graph + SMPL). In both cases, we outperform the strong baseline of [15], which demonstrates the benefit of estimating a more flexible non-parametric regression target, instead of regressing the model parameters in one shot.

Beyond the regression target, one of our contributions is also the insight that the task of regressing 3D vertex coordinates can be greatly simplified when a Graph CNN is used for the prediction. To investigate this design choice, we compare it with a naive alternative that regresses vertex coordinates with a series of fully connected layers on top of our image-based encoder (Mesh Regression - FC). This design clearly underperforms compared to our Graph-based architecture, demonstrating the importance of leveraging the mesh structure through the Graph CNN during the regression. The benefit of graph-based processing is also demonstrated qualitatively in Figure 4.

Input representation: For the next ablative study, we demonstrate the effectiveness of our mesh regression for different input representations (Table 3).
                                 MPJPE          Reconst. Error
Input    Output shape            P1     P2      P1     P2
RGB      Non-parametric          75.0   72.7    51.2   49.3
RGB      Parametric              74.7   71.9    51.9   50.1
Parts    Non-parametric          78.0   73.4    54.6   50.6
Parts    Parametric              80.4   77.4    56.1   53.3
DP [6]   Non-parametric          78.0   72.3    55.3   50.3
DP [6]   Parametric              78.9   74.2    55.3   51.0

Table 3: Comparison of non-parametric and parametric output shapes on Human3.6M (Protocols 1 and 2) for different input representations. The numbers are mean 3D joint errors in mm.

Figure 6: Successful reconstructions of our approach. Rows 1-3: LSP [13]. Rows 4-5: Human3.6M [10]. With light pink color we indicate the regressed non-parametric shape and with light blue the SMPL model regressed from the previous shape.
In Figure 6, we show qualitative examples of our approach, including both the non-parametric mesh and the corresponding SMPL mesh regressed using our shape as input. Typical failures can be attributed to challenging poses, severe self-occlusions, as well as interactions among multiple people.

Runtime: On a 2080 Ti GPU, network inference for a single image takes 33 ms, which is effectively real-time.

5. Summary

The goal of this paper was to address the problem of pose and shape estimation by attempting to relax the heavy reliance of previous works on a parametric model, typically SMPL [21]. While we retain the SMPL mesh topology, instead of directly predicting the model parameters for a given image, our target is to first estimate the locations of the 3D mesh vertices. To achieve this effectively, we propose a Graph-CNN architecture, which explicitly encodes the mesh structure and processes image features attached to its vertices. Our convolutional mesh regression outperforms the relevant baselines that regress model parameters directly for a variety of input representations, while ultimately, it achieves state-of-the-art results among model-based pose estimation approaches. Future work can focus on current limitations (e.g., the low resolution of the output mesh, missing details in the recovered shape), as well as on opportunities that this non-parametric representation provides (e.g., capturing aspects missing in many human body models, like hand articulation, facial expressions, clothing and hair).

Acknowledgements: We gratefully appreciate support through the following grants: NSF-IIP-1439681 (I/UCRC), NSF-IIS-1703319, NSF MRI 1626008, ARL RCTA W911NF-10-2-0016, ONR N00014-17-1-2093, ARL DCIST CRA W911NF-17-2-0181, the DARPA-SRC C-BRIC, and by Honda Research Institute.
References

[1] Federica Bogo, Angjoo Kanazawa, Christoph Lassner, Peter Gehler, Javier Romero, and Michael J Black. Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image. In ECCV, 2016.
[2] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Realtime multi-person 2D pose estimation using part affinity fields. In CVPR, 2017.
[3] Rishabh Dabral, Anurag Mundhada, Uday Kusupati, Safeer Afaque, Abhishek Sharma, and Arjun Jain. Learning 3D human pose from structure and motion. In ECCV, 2018.
[4] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[5] Peng Guan, Alexander Weiss, Alexandru O Balan, and Michael J Black. Estimating human shape and pose from a single image. In ICCV, 2009.
[6] Rıza Alp Güler, Natalia Neverova, and Iasonas Kokkinos. DensePose: Dense human pose estimation in the wild. In CVPR, 2018.
[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
[8] Yinghao Huang, Federica Bogo, Christoph Lassner, Angjoo Kanazawa, Peter V Gehler, Ijaz Akhter, and Michael J Black. Towards accurate marker-less human shape and pose estimation over time. In 3DV, 2017.
[9] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
[10] Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6M: Large scale datasets and predictive methods for 3D human sensing in natural environments. PAMI, 36(7):1325–1339, 2014.
[11] Aaron S Jackson, Adrian Bulat, Vasileios Argyriou, and Georgios Tzimiropoulos. Large pose 3D face reconstruction from a single image via direct volumetric CNN regression. In ICCV, 2017.
[12] Aaron S Jackson, Chris Manafas, and Georgios Tzimiropoulos. 3D human body reconstruction from a single image via volumetric regression. In ECCVW, 2018.
[13] Sam Johnson and Mark Everingham. Clustered pose and nonlinear appearance models for human pose estimation. In BMVC, 2010.
[14] Hanbyul Joo, Tomas Simon, and Yaser Sheikh. Total capture: A 3D deformation model for tracking faces, hands, and bodies. In CVPR, 2018.
[15] Angjoo Kanazawa, Michael J Black, David W Jacobs, and Jitendra Malik. End-to-end recovery of human shape and pose. In CVPR, 2018.
[16] Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada. Neural 3D mesh renderer. In CVPR, 2018.
[17] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
[18] Christoph Lassner, Javier Romero, Martin Kiefel, Federica Bogo, Michael J Black, and Peter V Gehler. Unite the people: Closing the loop between 3D and 2D human representations. In CVPR, 2017.
[19] Sijin Li and Antoni B Chan. 3D human pose estimation from monocular images with deep convolutional neural network. In ACCV, 2014.
[20] Or Litany, Alex Bronstein, Michael Bronstein, and Ameesh Makadia. Deformable shape completion with graph convolutional autoencoders. In CVPR, 2018.
[21] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J Black. SMPL: A skinned multi-person linear model. ACM Transactions on Graphics (TOG), 34(6):248, 2015.
[22] Diogo C Luvizon, David Picard, and Hedi Tabia. 2D/3D pose estimation and action recognition using multitask deep learning. In CVPR, 2018.
[23] Siddharth Mahendran, Haider Ali, and Rene Vidal. A mixed classification-regression framework for 3D pose estimation from 2D images. In BMVC, 2018.
[24] Julieta Martinez, Rayat Hossain, Javier Romero, and James J Little. A simple yet effective baseline for 3D human pose estimation. In ICCV, 2017.
[25] Dushyant Mehta, Srinath Sridhar, Oleksandr Sotnychenko, Helge Rhodin, Mohammad Shafiei, Hans-Peter Seidel, Weipeng Xu, Dan Casas, and Christian Theobalt. VNect: Real-time 3D human pose estimation with a single RGB camera. ACM Transactions on Graphics (TOG), 36(4):44, 2017.
[26] Arsalan Mousavian, Dragomir Anguelov, John Flynn, and Jana Košecká. 3D bounding box estimation using deep learning and geometry. In CVPR, 2017.
[27] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In ECCV, 2016.
[28] Mohamed Omran, Christoph Lassner, Gerard Pons-Moll, Peter Gehler, and Bernt Schiele. Neural body fitting: Unifying deep learning and model based human pose and shape estimation. In 3DV, 2018.
[29] Georgios Pavlakos, Xiaowei Zhou, and Kostas Daniilidis. Ordinal depth supervision for 3D human pose estimation. In CVPR, 2018.
[30] Georgios Pavlakos, Xiaowei Zhou, Konstantinos G Derpanis, and Kostas Daniilidis. Coarse-to-fine volumetric prediction for single-image 3D human pose. In CVPR, 2017.
[31] Georgios Pavlakos, Luyang Zhu, Xiaowei Zhou, and Kostas Daniilidis. Learning to estimate 3D human pose and shape from a single color image. In CVPR, 2018.
[32] Leonid Pishchulin, Eldar Insafutdinov, Siyu Tang, Bjoern Andres, Mykhaylo Andriluka, Peter V Gehler, and Bernt Schiele. DeepCut: Joint subset partition and labeling for multi person pose estimation. In CVPR, 2016.
[33] Anurag Ranjan, Timo Bolkart, Soubhik Sanyal, and Michael J Black. Generating 3D faces using convolutional mesh autoencoders. In ECCV, 2018.
[34] Grégory Rogez and Cordelia Schmid. MoCap-guided data augmentation for 3D pose estimation in the wild. In NIPS, 2016.
[35] Grégory Rogez, Philippe Weinzaepfel, and Cordelia Schmid. LCR-Net: Localization-classification-regression for human pose. In CVPR, 2017.
[36] Javier Romero, Dimitrios Tzionas, and Michael J Black. Embodied hands: Modeling and capturing hands and bodies together. ACM Transactions on Graphics (TOG), 36(6):245, 2017.
[37] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning internal representations by error propagation. Technical report, California Univ San Diego La Jolla Inst for Cognitive Science, 1985.
[38] Xiao Sun, Bin Xiao, Shuang Liang, and Yichen Wei. Integral human pose regression. In ECCV, 2018.
[39] J Tan, Ignas Budvytis, and Roberto Cipolla. Indirect deep structured learning for 3D human body shape and pose prediction. In BMVC, 2017.
[40] Bugra Tekin, Isinsu Katircioglu, Mathieu Salzmann, Vincent Lepetit, and Pascal Fua. Structured prediction of 3D human pose with deep neural networks. In BMVC, 2016.
[41] Bugra Tekin, Pablo Marquez Neila, Mathieu Salzmann, and Pascal Fua. Learning to fuse 2D and 3D image cues for monocular body pose estimation. In ICCV, 2017.
[42] Denis Tome, Christopher Russell, and Lourdes Agapito. Lifting from the deep: Convolutional 3D pose estimation from a single image. In CVPR, 2017.
[43] Hsiao-Yu Tung, Hsiao-Wei Tung, Ersin Yumer, and Katerina Fragkiadaki. Self-supervised learning of motion capture. In NIPS, 2017.
[44] Gül Varol, Duygu Ceylan, Bryan Russell, Jimei Yang, Ersin Yumer, Ivan Laptev, and Cordelia Schmid. BodyNet: Volumetric inference of 3D human body shapes. In ECCV, 2018.
[45] Nitika Verma, Edmond Boyer, and Jakob Verbeek. FeaStNet: Feature-steered graph convolutions for 3D shape analysis. In CVPR, 2018.
[46] Nanyang Wang, Yinda Zhang, Zhuwen Li, Yanwei Fu, Wei Liu, and Yu-Gang Jiang. Pixel2Mesh: Generating 3D mesh models from single RGB images. In ECCV, 2018.
[47] Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh. Convolutional pose machines. In CVPR, 2016.
[48] Yuxin Wu and Kaiming He. Group normalization. In ECCV, 2018.
[49] Andrei Zanfir, Elisabeta Marinoiu, and Cristian Sminchisescu. Monocular 3D pose and shape estimation of multiple people in natural scenes – the importance of multiple scene constraints. In CVPR, 2018.
[50] Xingyi Zhou, Qixing Huang, Xiao Sun, Xiangyang Xue, and Yichen Wei. Towards 3D human pose estimation in the wild: a weakly-supervised approach. In ICCV, 2017.
[51] Xiaowei Zhou, Menglong Zhu, Georgios Pavlakos, Spyridon Leonardos, Konstantinos G Derpanis, and Kostas Daniilidis. MonoCap: Monocular human motion capture using a CNN coupled with a geometric prior. PAMI, 41(4):901–914, 2019.