An Analysis of SVD for Deep Rotation Estimation
Advances in Neural Information Processing Systems (NeurIPS), 2020
Abstract
Symmetric orthogonalization via SVD, and closely related procedures, are well-known techniques for projecting matrices onto O(n) or SO(n). These tools have long been used for applications in computer vision, for example optimal 3D alignment problems solved by orthogonal Procrustes, rotation averaging, or Essential matrix decomposition. Despite its utility in different settings, SVD orthogonalization as a procedure for producing rotation matrices is typically overlooked in deep learning models, where the preferences tend toward classic representations like unit quaternions, Euler angles, and axis-angle, or more recently introduced methods. Despite the importance of 3D rotations in computer vision and robotics, a single universally effective representation is still missing. Here, we explore the viability of SVD orthogonalization for 3D rotations in neural networks. We present a theoretical analysis of SVD as used for projection onto the rotation group. Our extensive quantitative analysis shows that simply replacing existing representations with the SVD orthogonalization procedure obtains state-of-the-art performance in many deep learning applications covering both supervised and unsupervised training.
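As a concrete illustration of the projection the abstract refers to, the sketch below implements the standard SVD-based projection of an unconstrained 3x3 matrix onto SO(3) (special orthogonal Procrustes). It is a minimal NumPy sketch for illustration only; the function name `project_to_so3` and the NumPy setting are illustrative assumptions, not code from the paper.

```python
import numpy as np

def project_to_so3(m: np.ndarray) -> np.ndarray:
    """Project a 3x3 matrix onto SO(3) via SVD.

    Given M = U S V^T, the nearest rotation in the Frobenius sense is
    R = U diag(1, 1, det(U V^T)) V^T, which enforces det(R) = +1.
    """
    u, _, vt = np.linalg.svd(m)
    d = np.linalg.det(u @ vt)  # +1 or -1, up to floating-point error
    return u @ np.diag([1.0, 1.0, d]) @ vt

# Example: orthogonalize an unconstrained 9D output reshaped to 3x3.
m = np.random.randn(3, 3)
r = project_to_so3(m)
assert np.allclose(r @ r.T, np.eye(3), atol=1e-6)
assert np.isclose(np.linalg.det(r), 1.0, atol=1e-6)
```

In a learning setting, such a layer would sit between a network's unconstrained 9-dimensional output and any loss defined on rotation matrices; the sketch above only shows the projection itself.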