
Geometry-Free View Synthesis: Transformers and no 3D Priors

Robin Rombach* Patrick Esser* Björn Ommer


Ludwig Maximilian University of Munich & IWR, Heidelberg University, Germany
*Both authors contributed equally to this work. Code is available at https://git.io/JRPPs.

Figure 1. We present a probabilistic approach to Novel View Synthesis based on transformers, which does not require explicit 3D priors.
Given a single source frame and a camera transformation (center), we synthesize plausible novel views that exhibit high fidelity (right).
For comparison, SynSin [75] (left) yields uniform surfaces and unrealistic warps for large camera transformations.

Abstract

Is a geometric model required to synthesize novel views from a single image? Being bound to local convolutions, CNNs need explicit 3D biases to model geometric transformations. In contrast, we demonstrate that a transformer-based model can synthesize entirely novel views without any hand-engineered 3D biases. This is achieved by (i) a global attention mechanism for implicitly learning long-range 3D correspondences between source and target views, and (ii) a probabilistic formulation necessary to capture the ambiguity inherent in predicting novel views from a single image, thereby overcoming the limitations of previous approaches that are restricted to relatively small viewpoint changes. We evaluate various ways to integrate 3D priors into a transformer architecture. However, our experiments show that no such geometric priors are required and that the transformer is capable of implicitly learning 3D relationships between images. Furthermore, this approach outperforms the state of the art in terms of visual quality while covering the full distribution of possible realizations.

1. Introduction

Imagine looking through an open doorway. Most of the room on the other side is invisible. Nevertheless, we can estimate how the room likely looks. The few visible features enable an informed guess about the height of the ceiling, the position of walls, the lighting, etc. Given this limited information, we can then imagine several plausible realizations of the room on the other side. This 3D geometric reasoning and the ability to predict what the world will look like before we move is critical to orient ourselves in a world with three spatial dimensions. Therefore, we address the problem of novel view synthesis (NVS) [37, 25, 10] based on a single initial image and a desired change in viewpoint. In particular, we aim at specifically modeling large camera transformations, e.g. rotating the camera by 90° and looking at previously unseen scenery. As this is an underdetermined problem, we present a probabilistic generative model that learns the distribution of possible target images and synthesizes them at high fidelity. Solving this task has the potential to transform the passive experience of viewing images into an interactive, 3D exploration of the depicted scene. This requires an approach that both understands the geometry of the scene and, when rendering novel views of an input, considers their semantic relationships to the visible content.

Interpolation vs. Extrapolation Recently, impressive synthesis results have been obtained with geometry-focused approaches in the multi-view setting [56, 57, 46], where not just a single image but a large number of images or a video of a scene is available, such that the task is closer to view interpolation than to a synthesis of genuinely novel views. In contrast, if only a single image is available, the synthesis of novel views is always an extrapolation task. Solving this task is appealing because it allows a 3D exploration of a scene starting from only a single picture.

While existing approaches for single-view synthesis make small camera transformations, such as a rotation by a few degrees, possible, we aim at expanding the possible camera changes to include large transformations.

The latter necessitates a probabilistic framework: especially when applying large transformations, the problem is underdetermined because there are many possible target images which are consistent with the source image and camera pose. This task cannot be solved with a reconstruction objective alone, as it will either lead to averaging, and hence blurry synthesis results, or, when combined with an adversarial objective, cause significant mode-dropping when modeling the target distribution. To remedy these issues, we propose to model this task with a powerful, autoregressive transformer, trained to maximize the likelihood of the target data.

Explicit vs. Implicit Geometry The success of transformers is often attributed to the fact that they enforce fewer inductive biases compared to convolutional neural networks (CNNs), which are biased towards local context. Relying mainly on CNNs, this locality bias required previous approaches for NVS to explicitly model the overall geometric transformation, thereby enforcing yet another inductive bias regarding the three-dimensional structure. In contrast, by modeling interactions between far-flung regions of source and target images, transformers have the potential to learn to represent the required geometric transformation implicitly, without requiring such hand-engineered operations. This raises the question whether it is at all necessary to explicitly include such biases in a transformer model. To address this question, we perform several experiments with varying degrees of inductive bias and find that our autoregressively trained transformer model is indeed capable of learning this transformation completely without built-in priors and can even learn to predict depth in an unsupervised fashion.

To summarize our contributions, we (i) propose to learn a probabilistic model for single-view synthesis that properly takes into account the uncertainties inherent in the task and show that this leads to significant benefits over previous state-of-the-art approaches when modeling large camera transformations; see Fig. 1. We (ii) also analyze the need for explicit 3D inductive biases in transformer architectures for the task of NVS with large viewpoint changes and find that transformers make it obsolete to explicitly code 3D transformations into the model and can instead learn the required transformation implicitly themselves. We also (iii) find that the benefits of providing them with geometric information in the form of explicit depth maps are relatively small, and investigate the ability to recover an explicit depth representation from the layers of a transformer which has learned to represent the geometric transformation implicitly and without any depth supervision.

2. Related Work

Novel View Synthesis (NVS) We can identify three seminal works which illustrate different levels of reliance on geometry to synthesize novel views. [37] describes an approach which requires no geometric model, but requires a large number of structured input views. [25] describes a similar approach but shows that unstructured input views suffice if geometric information in the form of a coarse volumetric estimate is employed. [10] can work with a sparse set of views but requires an accurate photogrammetric model. Subsequent work also analyzed the commonalities and trade-offs of these approaches [5]. Ideally, an approach could synthesize novel views from a single image without having to rely on accurate geometric models of the scene, and early works on deep learning for NVS explored the possibility to directly predict novel views [14, 15, 36, 78, 68] or their appearance flows [82, 50, 66] with convolutional neural networks (CNNs). However, results of these methods were limited to simple or synthetic data and subsequent works combined geometric approaches with CNNs.

Among these deep learning approaches that explicitly model geometry, we can distinguish between approaches relying on a proxy geometry to perform a warping into the target view, and approaches predicting a 3D representation that can subsequently be rendered in novel views. For the proxy geometry, [45] relies on point clouds obtained from structure from motion (SfM) [1, 60] and multi-view stereo (MVS) [61, 20]. To perform the warping, [19, 77] use plane-sweep volumes, [34] estimates depth at novel views and [8, 76] a depth probability volume. [56, 57] post-process MVS results to a global mesh and [28] relies on per-view meshes [29]. Other approaches learn 3D features per scene, which are associated with a point cloud [2] or UV maps [70], and decoded to the target image using a CNN. However, all of these approaches rely on multi-view inputs to obtain an estimate for the proxy geometry.

Approaches which predict 3D representations mainly utilize layered representations such as layered depth images (LDIs) [62, 26, 27], multi-plane images (MPIs) [67, 81, 64, 18] and variants thereof [51, 38]. While this allows an efficient rendering of novel views from the obtained representations, their layered nature limits the range of novel views that can be synthesized with them. Another emerging approach [46] represents a five-dimensional light field directly with a multi-layer perceptron (MLP), but still requires a large number of input views to correctly learn this MLP.

In the case of NVS from a single view, SfM approaches cannot be used to estimate proxy geometries, and early works relied on human interaction to obtain a scene model [31]. [65] uses a large-scale, scene-specific light field dataset to learn CNNs which predict light fields from a single image. [41] assumes that scenes can be represented by a fixed set of planar surfaces. To handle more general scenes, most methods rely on monocular depth estimation [54, 21, 22, 80, 23] to predict warps [48, 75, 40] or LDIs [11, 35, 63]. [71] directly predicts an MPI, and [32] a mesh. To handle disocclusions, most of these methods rely on adversarial losses, inspired by generative adversarial networks (GANs) [24], to perform inpainting in these regions.

Figure 2. We formulate novel view synthesis as sampling from the distribution p(x^dst | x^src, T) of target images x^dst for a given source image x^src and camera change T. We use a VQGAN to model this distribution autoregressively with a transformer and introduce a conditioning function f(x^src, T) to encode inductive biases into our model. We analyze explicit variants, which estimate scene depth d and warp source features into the novel view, as well as implicit variants without such a warping. The table below summarizes the variants for f.

variant          explicit warp   requires depth   warped features       f(x^src, T)
expl.-img        yes             yes              x^src                 Eq. (9)
expl.-feat       yes             yes              E(x^src)              Eq. (10)
expl.-emb        yes             yes              e(E(x^src)), e^pos    Eq. (11)
impl.-catdepth   no              yes              –                     Eq. (12)
impl.-depth      no              yes              –                     Eq. (13)
impl.-nodepth    no              no               –                     Eq. (14)
hybrid           yes             yes              e(E(x^src)), e^pos    Eq. (11) + Eq. (13)

However, the quality of these approaches quickly degrades for larger viewpoint changes because they do not model the uncertainty of the task. While adversarial losses can remedy an averaging effect over multiple possible realizations to some degree, our empirical results show the advantages of properly modeling the probabilistic nature of NVS from a single image.

Self-Attention and Transformers The transformer [73] is a sequence-to-sequence model that models interactions between learned representations of sequence elements by the so-called attention mechanism [4, 49]. Importantly, this mechanism does not introduce locality biases such as those present in e.g. CNNs, as the importance and interactions of sequence elements are weighed regardless of their relative positioning. We build our autoregressive transformer from the GPT-2 architecture [52], i.e. multiple blocks of multihead self-attention, layer norm [3] and position-wise MLP.

Generative Two-Stage Approaches Our approach is based on work in conditional generative modeling combined with neural discrete representation learning (VQVAE) [72]. The latter aims to learn discrete, compressed representations through either vector quantization or soft relaxation of the discrete assignment [44, 33]. This paradigm provides a suitable space [59, 13, 7] to train autoregressive (AR) likelihood models on the latent representations and has been utilized to train generative models for hierarchical, class-conditional image synthesis [55], text-controlled image synthesis [53] and music generation [12]; continuous analogues using VAEs [9] or normalizing flows [16] also exist. Recently, [17] demonstrated that adversarial training of the VQVAE improves compression while retaining high-fidelity reconstructions, subsequently enabling efficient training of an AR transformer model on the learned latent space (yielding a so-called VQGAN). We directly build on this work and use VQGANs to represent both source and target views and, when needed, depth maps. Concurrent to our work, [58] develop an approach to NVS which uses a VQVAE and PixelCNN++ [59] to outpaint large viewpoint changes.

3. Approach

To render a given image x^src experienceable in a 3D manner, we allow the specification of arbitrary new viewpoints, including in particular large camera transformations T. As a result we expect multiple plausible realizations x^dst for the novel view, which are all consistent with the input, since this problem is highly underdetermined. Consequently, we follow a probabilistic approach and sample novel views from the distribution

x^dst ∼ p(x^dst | x^src, T).   (1)

To solve this task, a model must explicitly or implicitly learn the 3D relationship between both images and T. In contrast to most previous work that tries to solve this task with CNNs and therefore oftentimes includes an explicit 3D transformation, we want to use the expressive transformer architecture and investigate to what extent the explicit specification of such a 3D model is necessary at all.

Sec. 3.1 describes how to train a transformer model in the latent space of a VQGAN. Next, Sec. 3.2 shows how inductive biases can be built into the transformer and describes all bias variants that we analyze. Finally, Sec. 3.3 presents our approach to extract geometric information from a transformer where no 3D bias has been explicitly specified.

3.1. Probabilistic View Synthesis in Latent Space

Learning the distribution in Eq. (1) requires a model which can capture long-range interactions between source and target view to implicitly represent geometric transformations. Transformer architectures naturally meet these requirements, since they are not confined to short-range relations such as CNNs with their convolutional kernels and exhibit state-of-the-art performance [73]. Since likelihood-based models have been shown [59] to spend too much capacity on short-range interactions of pixels when modeling images directly in pixel space, we follow [17] and employ a two-stage training.

The first stage performs adversarially guided discrete representation learning (VQGAN), obtaining an abstract latent space that has proved to be well-suited for efficiently training generative transformers [17].

Modeling Conditional Image Likelihoods VQGAN consists of an encoder E, a decoder G and a codebook Z = {z_i}_{i=1}^{|Z|} of discrete representations z_i ∈ R^{d_z}. The trained VQGAN allows encoding any x ∈ R^{H×W×3} into the discrete latent space as E(x) ∈ R^{h×w×d_z} (this includes the vector quantization step as described in [72]). Unrolled in raster-scan order, this latent representation corresponds to a sequence s ∈ R^{h·w×d_z} and can be equivalently expressed as a sequence of integers which index the learned codebook Z. Following the usual designation [73], we refer to the sequence elements as “tokens”. An embedding function g = g(s) ∈ R^{h·w×d_e} maps each of these tokens into the embedding space of the transformer 𝒯 and adds learnable positional encodings. Similarly, to encode the input view x^src and the camera transformation T, both are mapped into the embedding space by a function f:

f : (x^src, T) ↦ f(x^src, T) ∈ R^{n×d_e},   (2)

where n denotes the length of the conditioning sequence. By using different functions f, various inductive biases can be incorporated into the architecture as described in Sec. 3.2. The transformer 𝒯 then processes the concatenated sequence [f(x^src, T), g(s^dst)] to learn the distribution of plausible novel views conditioned on x^src and T,

p_𝒯(s^dst | f(x^src, T)) = ∏_i p_𝒯(s^dst_i | s^dst_{<i}, f(x^src, T)).   (3)

Hence, to train an autoregressive transformer by next-token prediction p_𝒯(s_i | s_{<i}, f(x^src, T)), we maximize the log-likelihood of the data, leading to the training objective

L_𝒯 = E_{x^src, x^dst ∼ p(x^src, x^dst)} [ −log p_𝒯(s^dst | f(x^src, T)) ].   (4)
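In code, Eqs. (3)-(4) amount to a standard next-token cross-entropy loss over the target code sequence, conditioned on the embedded source and camera. The sketch below is a minimal illustration under that assumption; `vqgan_encoder`, `f` and `transformer` are hypothetical handles for the trained first stage, the conditioning function and the autoregressive model, not the released API.

```python
import torch.nn.functional as F

def training_step(x_src, x_dst, cam):
    """Minimal sketch of the objective in Eqs. (3)-(4); all model handles are placeholders."""
    s_dst = vqgan_encoder.codes(x_dst)      # (B, h*w) discrete target tokens in raster-scan order
    cond = f(x_src, cam)                    # (B, n, d_e) conditioning embeddings, Eq. (2)
    # The transformer is assumed to shift the target tokens internally and to apply a causal
    # mask, so logits[:, i] depends only on s_dst[:, :i] and the conditioning sequence.
    logits = transformer(cond, s_dst)       # (B, h*w, |Z|)
    # Maximizing the log-likelihood of the data is equivalent to minimizing per-token cross-entropy.
    return F.cross_entropy(logits.flatten(0, 1), s_dst.flatten())
```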
3.2. Encoding Inductive Biases

Besides achieving high-quality NVS, we aim to investigate to what extent transformers depend on a 3D inductive bias. To this end, we compare approaches where a geometric transformation is built explicitly into the conditioning function f, and approaches where no such transformation is used. In the latter case, the transformer itself must learn the required relationship between source and target view. If successful, the transformation will be described implicitly by the transformer.

Geometric Image Warping We first describe how an explicit geometric transformation results from the 3D relation of source and target images. For this, pixels of the source image are back-projected to three-dimensional coordinates, which can then be re-projected into the target view. We assume a pinhole camera model, such that the projection of 3D points to homogeneous pixel coordinates is determined through the intrinsic camera matrix K. The transformation between source and target coordinates is given by a rigid motion, consisting of a rotation R and a translation t. Together, these parameters specify the desired control over the novel view to be generated, i.e. T = (K, R, t).

To project pixels back to 3D coordinates, we require information about their depth d, since this information has been discarded by their projection onto the camera plane. Since we assume access to only a single source view, we require a monocular depth estimate. Following previous works [63, 40], we use MiDaS [54] in all of our experiments which require monocular depth information.

The transformation can now be described as a mapping of pixels i ∈ {1, …, H}, j ∈ {1, …, W} in the source image x^src ∈ R^{H×W×3} to pixels i′, j′ in the target image. In homogeneous coordinates, their relationship is given by

(j′, i′, 1)^⊤ ≃ K ( R K^{−1} d(i, j) (j, i, 1)^⊤ + t ).   (5)

This relationship defines a forward flow field F^{src→dst} = F^{src→dst}(K, R, t, d) ∈ R^{H×W×3} from source to target as a function of depth and camera parameters. The flow field can then be used to warp the source image x^src into the target view with a warping operation S:

x^wrp = S(F^{src→dst}, x^src).   (6)

Because the target pixels obtained from the flow are not necessarily integer valued, we follow [47] and implement S by bilinearly splatting features across the four closest target pixels. When multiple source pixels map to the same target pixel, we use their relative depth to give points closer to the camera more weight (a soft variant of z-buffering).

In the simplest case, we can now describe the difference between explicit and implicit approaches in the way that they receive information about the source image and the desired target view. Here, explicit approaches receive source information warped using the camera parameters, whereas implicit approaches receive the original source image and the camera parameters themselves, i.e.

explicit: S(F^{src→dst}(K, R, t, d), x^src)   (7)
implicit: (K, R, t, d, x^src)   (8)

Thus, in explicit approaches we enforce an inductive bias on the 3D relationship between source and target by making this relationship explicit, while implicit approaches have to learn it on their own. Next, we introduce a number of different variants for each, which are summarized in Fig. 2.
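To make the explicit variants concrete, the sketch below computes the forward flow field of Eq. (5) from the camera parameters and a monocular depth map. It is a minimal PyTorch illustration, not the authors' implementation, and it leaves out the bilinear splatting with soft z-buffering of Eq. (6) that follows [47].

```python
import torch

def forward_flow(K, R, t, depth):
    """Target pixel coordinates (j', i') for every source pixel, following Eq. (5).

    K: (3, 3) intrinsics, R: (3, 3) rotation, t: (3,) translation,
    depth: (H, W) monocular depth estimate (e.g. from MiDaS, after scale alignment).
    """
    H, W = depth.shape
    i, j = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([j, i, torch.ones_like(j)], dim=-1).float()     # homogeneous (j, i, 1)
    rays = torch.einsum("xy,hwy->hwx", torch.inverse(K), pix)         # back-project: K^{-1} (j, i, 1)^T
    pts = depth[..., None] * rays                                     # 3D points in the source frame
    pts = torch.einsum("xy,hwy->hwx", R, pts) + t                     # rigid motion into the target frame
    proj = torch.einsum("xy,hwy->hwx", K, pts)                        # re-project with the intrinsics
    return proj[..., :2] / proj[..., 2:].clamp(min=1e-6)              # dehomogenize; points behind the camera are not handled here
```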

Table 1. To assess the effect of encoding different degrees of 3D prior knowledge, we evaluate all variants on RealEstate and ACID using
negative log-likelihood (NLL), FID [30] and PSIM [79], PSNR and SSIM [74]. We highlight best, second best and third best scores.
RealEstate10K ACID
method FID ↓ NLL ↓ PSIM ↓ SSIM ↑ PSNR ↑ FID ↓ NLL ↓ PSIM ↓ SSIM ↑ PSNR ↑
impl.-nodepth 48.59 4.956 3.17 ±0.43 0.42 ±0.13 12.16 ±2.54 42.88 5.365 ±0.007 2.90 ±0.53 0.40 ±0.15 15.17 ±3.40
hybrid 48.84 4.913 3.09 ±0.46 0.44 ±0.13 12.51 ±2.69 44.47 5.341 ±0.008 2.83 ±0.54 0.41 ±0.15 15.54 ±3.52
impl.-depth 49.15 4.836 3.05 ±0.46 0.44 ±0.13 12.66 ±2.68 42.93 5.353 ±0.011 2.86 ±0.52 0.41 ±0.15 15.53 ±3.34
expl.-img 49.63 4.924 3.18 ±0.46 0.43 ±0.13 12.11 ±2.66 47.72 5.414 ±0.006 3.00 ±0.51 0.40 ±0.14 14.83 ±3.20
impl.-catdepth 50.04 4.860 3.10 ±0.45 0.43 ±0.13 12.43 ±2.66 47.44 5.350 ±0.004 2.86 ±0.55 0.42 ±0.15 15.54 ±3.57
expl.-emb 50.35 5.004 3.15 ±0.45 0.43 ±0.13 12.30 ±2.66 47.08 5.416 ±0.007 2.88 ±0.54 0.42 ±0.15 15.45 ±3.61
expl.-feat 54.82 5.159 3.31 ±0.43 0.41 ±0.13 11.75 ±2.58 52.65 5.657 ±0.003 3.14 ±0.52 0.38 ±0.15 14.06 ±3.28

Explicit Geometric Transformations In the following, we describe all considered variants in terms of the transformer's conditioning function f. Furthermore, e denotes a learnable embedding mapping the discrete VQGAN codes E(x) into the embedding space of the transformer. Similarly, e^pos ∈ R^{n×d_e} denotes a learnable positional encoding. The flow field F^{src→dst}(K, R, t, d) is always computed from x^src and, to improve readability, we omit it from the arguments of the warping operation, i.e. S(·) = S(F^{src→dst}(K, R, t, d), ·).

(I) Our first explicit variant, expl.-img, warps the source image and encodes it in the same way as the target image:

f(x^src, T) = e(E(S(x^src))) + e^pos   (9)

(II) Inspired by previous works [56, 2], we include an expl.-feat variant which first encodes the original source image, and subsequently applies the warping on top of these features. We again use the VQGAN encoder E to obtain

f(x^src, T) = e(S(E(x^src))) + e^pos   (10)

(III) To account for the fact that the warped features in Eq. (10) remain fixed (due to E being frozen), we also consider an expl.-emb variant that warps the learnable embeddings and positional encodings of the transformer model. More precisely, we concatenate original embeddings with their warped variants and merge them with a learnable matrix. Doing this for both the embeddings of the codes and for the positional encodings using matrices W^emb, W^pos ∈ R^{d_e×2·d_e}, the conditioning function f then reads:

f(x^src, T) = W^emb [ e(E(x^src)), S(e(E(x^src))) ] + W^pos [ e^pos, S(e^pos) ]   (11)

Implicit Geometric Transformations Next, we describe implicit variants that we use to analyze if transformers, with their ability to attend to all positions equally well, require an explicit geometric transformation built into the model. We use the same notation as for the explicit variants.

(IV) The first variant, impl.-catdepth, provides the transformer with all the same components which are used in the explicit variants: camera parameters K, R, t, estimated depth d and source image x^src. Camera parameters are flattened and concatenated to T̂, which is mapped via W^cam ∈ R^{d_e×1} to the embedding space. Depth and source images are encoded by VQGAN encoders E^d and E to obtain

f(x^src, T) = [ W^cam T̂, e(E^d(d)), e(E(x^src)) ] + e^pos   (12)

Compared to the other variants, this sequence is roughly 3/2 times longer, resulting in twice the computational costs.

(V) Therefore, we also include an impl.-depth variant, which concatenates the discrete codes of depth and source image, and maps them with a matrix W ∈ R^{d_e×2·d_z} to the embedding space to avoid an increase in sequence length:

f(x^src, T) = [ W^cam T̂, W[E^d(d), E(x^src)] ] + e^pos   (13)

(VI) Implicit approaches offer an intriguing possibility: because they do not need an explicit estimate of the depth to perform the warping operation S, they hold the potential to solve the task without such a depth estimate. Thus, impl.-nodepth uses only camera parameters and source image, the bare minimum according to our task description.

f(x^src, T) = [ W^cam T̂, e(E(x^src)) ] + e^pos   (14)

(VII) Finally, we analyze if explicit and implicit approaches offer complementary strengths. Thus, we add a hybrid variant whose conditioning function is the sum of the f's of expl.-emb in Eq. (11) and impl.-depth in Eq. (13).

3.3. Depth Readout

To investigate the ability to learn an implicit model of the geometric relationship between different views, we propose to extract an explicit estimate of depth from a trained model. To do so, we use linear probing [6], which is commonly used to investigate the feature quality of unsupervised approaches. More specifically, we assume a transformer model consisting of L layers and of type impl.-nodepth, which is conditioned on source frame and transformation parameters only. Next, we specify a certain layer 0 ≤ l ≤ L (where l = 0 denotes the input) and extract its latent representation e^l, corresponding to the positions of the provided source frame x^src. We then train a position-wise linear classifier W to predict the discrete latent representation of the depth encoder E^d (see Sec. 3.2) via a cross-entropy objective from e^l. Note that both the weights of the transformer and the VQGANs remain fixed.
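As a concrete illustration of this probing setup, the sketch below trains the position-wise linear classifier on frozen activations; `transformer.hidden_states`, `depth_vqgan_encoder.codes` and `loader` are hypothetical handles standing in for the frozen impl.-nodepth model, the depth VQGAN encoder E^d and a data loader of (source image, camera, depth) triples.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_e, codebook_size, layer = 1024, 16384, 4      # embedding size, |Z| and probed layer as stated in the paper

probe = nn.Linear(d_e, codebook_size)           # position-wise linear classifier W
opt = torch.optim.Adam(probe.parameters(), lr=1e-4)

for x_src, cam, depth in loader:                # loader is a placeholder
    with torch.no_grad():                       # transformer and VQGANs stay frozen
        e_l = transformer.hidden_states(x_src, cam, layer=layer)   # (B, n_src, d_e), source positions only
        target = depth_vqgan_encoder.codes(depth)                  # (B, n_src) indices of E^d(depth)
    logits = probe(e_l)                                            # (B, n_src, |Z|)
    loss = F.cross_entropy(logits.flatten(0, 1), target.flatten())
    opt.zero_grad(); loss.backward(); opt.step()
```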

Figure 3. Average reconstruction error of the best sample as a function of the number of samples on RealEstate. With just four samples,
impl.-depth reaches state-of-the-art performance in two out of three metrics, and with 16 samples in all three of them.

4. Experiments

First, Sec. 4.1 integrates the different explicit and implicit inductive biases into the transformer to judge if such geometric biases are needed at all. Following up, Sec. 4.2 compares implicit variants to previous work and evaluates both the visual quality and fidelity of synthesized novel views. Finally, we evaluate the ability of the least biased variant, impl.-nodepth, to implicitly represent scene geometry, observing that it indeed captures such 3D information.

4.1. Comparing Implicit and Explicit Transformers

To investigate if transformers need (or benefit from) an explicit warping between source and target view, we first compare how well the different variants from Sec. 3.2 (see also Fig. 2) can learn a probabilistic model for NVS. We then evaluate both the quality and fidelity of their samples.

To prepare, we first train VQGANs on frames of the RealEstate10K [81] and ACID [40] datasets, whose preparation is described in the supplementary. We then train the various transformer variants on the latent space of the respective first-stage models. Note that this procedure ensures comparability of different settings within a given dataset, as the space in which the likelihood is measured remains fixed.

Comparing Density Estimation Quality A basic measure for the performance of probabilistic models is the likelihood assigned to validation data. Hence, we begin our evaluation of the different variants by comparing their (minimal) negative log-likelihood (NLL) on RealEstate and ACID. Based on the results in Tab. 1, we can identify three groups with significant performance differences on ACID: the implicit variants impl.-catdepth, impl.-depth, and impl.-nodepth and hybrid achieve the best performance, which indicates an advantage over the purely explicit variants. Adding an explicit warping as in the hybrid model does not help significantly. Moreover, expl.-feat is unfavorable, possibly due to the features E(x^src) remaining fixed while training the transformer. The learnable features which are warped in variant expl.-emb obtain a lower NLL and thereby confirm the former hypothesis. Still, there are no improvements of warped features over warped pixels as in variant expl.-img.

The results on RealEstate look similar, but in this case the implicit variant without depth, impl.-nodepth, performs a bit worse than expl.-img. Presumably, accurate depth information obtained from a supervised, monocular depth estimation model is much more beneficial in the indoor setting of RealEstate compared to the outdoor setting of ACID.

Visualizing Entropy of Predictions The NLL measures the ability of the transformer to predict target views. The entropy of the predicted distribution over the codebook entries for each position captures the prediction uncertainty of the model. See Fig. 4 for a visualization of variant impl.-nodepth. The model is more confident in its predictions for regions which are visible in the source image. This indicates that it is indeed able to relate source and target via their geometry instead of simply predicting an arbitrary novel view.

Figure 4. Visualization of the entropy of the predicted target code distribution for impl.-nodepth (panels: x^src, x^dst, entropy). Increased confidence (darker colors) in regions which are visible in the source image indicates its ability to relate source and target geometrically, without 3D bias.

Measuring Image Quality and Fidelity Since NLL does not necessarily reflect the visual quality of the images [69], we also evaluate the latter directly. Comparing predictions with the ground truth helps to judge how well the model respects the geometry. However, for large camera movements, large parts of the target image are not visible in the source view. Thus, we must also evaluate the quality of the content imagined by the model, which might be fairly different from that of the ground truth, since the latter is just one of many possible realizations of the real world.

To evaluate the image quality without a direct comparison to the ground truth, we report FID scores [30]. To evaluate the fidelity to the ground truth, we report the low-level similarity metrics SSIM [74] and PSNR, and the high-level similarity metric PSIM [79], which better represents human assessments of visual similarity. Tab. 1 contains the results for RealEstate10K and ACID. In general, they reflect the findings from the NLL values: image quality and fidelity of implicit variants with access to depth are superior to explicit variants.

Figure 5. Qualitative results on RealEstate10K (columns: source, target, 3DPhoto [63], SynSin [75], expl.-det, impl.-depth, impl.-nodepth). We compare three deterministic convolutional baselines (3DPhoto [63], SynSin [75], expl.-det) to our implicit variants impl.-depth and impl.-nodepth. Ours is able to synthesize plausible novel views, whereas the others produce artifacts or blurred, uniform areas. The depicted target is only one of many possible realizations; we visualize samples in the supplement.

Table 2. Quantitative comparison on RealEstate. Reconstruction metrics are reported with 32 samples, see Fig. 3 for other values.

method          FID ↓    IS ↑          PSIM ↓        SSIM ↑        PSNR ↑
impl.-nodepth   48.59    4.24 ±0.30    2.95 ±0.43    0.49 ±0.12    14.06 ±2.41
impl.-depth     49.15    4.17 ±0.52    2.86 ±0.45    0.50 ±0.12    14.47 ±2.51
expl.-det       66.66    4.47 ±0.49    2.97 ±0.55    0.42 ±0.15    13.60 ±2.56
3DPhoto [63]    85.43    5.10 ±0.39    3.20 ±0.54    0.49 ±0.12    12.80 ±2.33
SynSin [75]     113.88   3.70 ±0.30    3.30 ±0.51    0.47 ±0.13    12.87 ±2.46
MiDaS [54]      132.13   5.63 ±0.77    3.38 ±0.56    0.46 ±0.15    13.09 ±2.16

Table 3. Quantitative comparison on ACID using 32 samples for reconstruction metrics. (n): number of steps used for InfNat [40].

method           FID ↓    IS ↑          PSIM ↓        SSIM ↑        PSNR ↑
impl.-nodepth    42.88    2.63 ±0.14    2.77 ±0.54    0.46 ±0.14    16.49 ±3.33
impl.-depth      42.93    2.62 ±0.23    2.73 ±0.53    0.46 ±0.14    16.80 ±3.24
expl.-det        53.77    2.60 ±0.18    2.72 ±0.56    0.41 ±0.16    16.60 ±3.43
InfNat [40](5)   76.07    2.44 ±0.21    3.28 ±0.47    0.39 ±0.15    15.24 ±2.87
3DPhoto [63]     76.17    3.50 ±0.47    3.01 ±0.64    0.45 ±0.14    14.87 ±3.08
InfNat [40](1)   79.00    2.71 ±0.23    3.11 ±0.58    0.42 ±0.15    15.35 ±3.50
InfNat [40](10)  88.81    2.52 ±0.20    3.44 ±0.41    0.35 ±0.14    14.32 ±2.55
MiDaS [54]       106.10   3.62 ±0.36    3.11 ±0.68    0.45 ±0.15    14.82 ±2.85
The implicit variant without depth (impl.-nodepth) consistently achieves the same good FID scores as the implicit variants with depth (impl.-catdepth & impl.-depth), but cannot achieve quite the same level of performance in terms of reconstruction fidelity. However, it is on par with the explicit variants, albeit requiring no depth supervision.

4.2. Comparison to Previous Approaches

Next, we compare our best performing variants impl.-depth and impl.-nodepth to previous approaches for NVS: 3DPhoto [63], SynSin [75] and InfNat [40]. 3DPhoto [63] has been trained on MSCOCO [39] to work on arbitrary scenes, whereas SynSin [75] and InfNat [40] have been trained on RealEstate and ACID, respectively.

To assess the effect of formulating the problem probabilistically, we introduce another baseline to compare probabilistic and deterministic models with otherwise equal architectures. Specifically, we use the same VQGAN architecture as described in Sec. 3.1. However, it is not trained as an autoencoder; instead the encoder receives the warped source image x^wrp, and the decoder predicts the target image x^dst. This model, denoted by expl.-det, represents an explicit and deterministic baseline. Finally, we include the warped source image itself as a baseline denoted by MiDaS [54].

Utilizing the probabilistic nature of our model, we analyze how close we can get to a particular target image with a fixed number of samples. Tab. 2 and 3 report the reconstruction metrics with 32 samples per target. The probabilistic variants consistently achieve the best values for the similarity metrics PSIM, SSIM and PSNR on RealEstate, and are always among the best three on ACID, where expl.-det achieves the best PSIM values and the second best PSNR values. We show the reconstruction metrics on RealEstate as a function of the number of samples in Fig. 3. With just four samples, the performance of impl.-depth is better than all other approaches except for the SSIM values of 3DPhoto [63], which are overtaken by impl.-depth with 16 samples and do not saturate with 32 samples, which demonstrates the advantages of a probabilistic formulation of NVS.

These results should be considered along with the competitive FID scores in Tab. 2 and 3 (where the implicit variants always constitute the best and second best value) and the qualitative results in Fig. 5 and 6, underlining the high quality of our synthesized views. It is striking that IS assigns the best scores to 3DPhoto [63] and MiDaS [54], which contain large and plain regions of gray color where the source image does not provide information about the content. Where the monocular depth estimation is accurate, 3DPhoto [63] shows good results, but it can only inpaint small areas. SynSin [75] and InfNat [40] can fill larger areas but, for large camera motions, their results become blurry, and a similar observation holds for expl.-det. The probabilistic variants impl.-depth and impl.-nodepth consistently produce plausible results which are largely consistent with the source image, although small details sometimes differ. This shows that only the probabilistic variants are able to synthesize high quality images for large camera changes.
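The reconstruction numbers in Tab. 2 and 3 (and the curves in Fig. 3) follow a best-of-n protocol: draw several samples per source/target pair and keep the best score. A minimal sketch of that protocol is shown below; `sample_novel_view` and `psim` are hypothetical helpers for autoregressive sampling plus VQGAN decoding and for the PSIM metric.

```python
import torch

@torch.no_grad()
def best_of_n(x_src, cam, x_dst, n=32):
    """Best reconstruction score among n samples for one source/target pair (sketch)."""
    scores = []
    for _ in range(n):
        x_hat = sample_novel_view(x_src, cam)        # sample target codes autoregressively, then decode
        scores.append(psim(x_hat, x_dst).item())     # PSIM: lower is better
    return min(scores)
```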
Figure 6. Qualitative results on ACID (columns: source, target, 3DPhoto [63], InfNat [40], expl.-det, impl.-depth, impl.-nodepth). The outdoor setting of the ACID dataset yields similar results as the indoor setting in Fig. 5. Here, we evaluate against the baselines 3DPhoto [63], InfNat [40] and expl.-det. For InfNat [40], we use 5 steps to synthesize a novel view.

Figure 7. Minimal validation loss and reconstruction quality of depth predictions obtained from linear probing as a function of different transformer layers. The probed variant is impl.-nodepth.

Figure 8. Linearly probed depth maps for different transformer layers (columns: input, layer #0, layer #1, layer #4, depth reconstruction). The results mirror the curve in Fig. 7: after a strong initial increase, the quality for layer 4 is best. The depth reconstructions in the right column provide an upper bound on achievable quality.

4.3. Probing for Geometry

Based on the experiments in Sec. 4.1 and Sec. 4.2, which showed that the unbiased variant impl.-nodepth is mostly on par with the others, we investigate the question whether this model is able to develop an implicit 3D "understanding" without explicit 3D supervision. To do so, we perform linear probing experiments as described in Sec. 3.3.

Fig. 7 plots the negative cross-entropy loss and the negative PSIM reconstruction error of the recovered depth maps against the layer depth of the transformer model. Both metrics are consistent and quickly increase when probing deeper representations of the transformer model. Furthermore, both curves exhibit a peak for l = 4 (i.e. after the third self-attention block) and then slowly decrease with increasing layer depth. The depth maps obtained from this linear map resemble the corresponding true depth maps qualitatively well, as shown in Fig. 8. This figure demonstrates that a linear estimate of depth only becomes possible through the representation learned by the transformer (l = 4) but not through the representation of the VQGAN encoder (l = 0). We hypothesize that, in order to map an input view onto a target view, the transformer indeed develops an implicit 3D representation of the scene to solve its training task.

5. Discussion

We have introduced a probabilistic approach based on transformers for novel view synthesis from a single source image with strong changes in viewpoint. Comparing various explicit and implicit 3D inductive biases for the transformer showed that explicitly using a 3D transformation in the architecture does not significantly help performance. However, removing inductive biases also comes at a price. Without priors on camera movements or warping layers, the architecture must be able to take relationships between arbitrary positions into account, which requires a compressed representation. In our experiments, compression artifacts dominate the error for small viewpoint changes. Avoiding them increases computational costs (see Sec. E). Synthesizing two views from the same image generally results in two incompatible realizations. However, we can run our approach iteratively. When synthesizing continuous trajectories, sampling still leads to flickering, but this can be alleviated with deterministic sampling (see Sec. A).

To conclude, our approach is not a final solution to novel view synthesis, but an important step towards synthesizing large camera changes and understanding the need for 3D priors. Our results demonstrate significant improvements over existing approaches, and even with no depth information as input our model learns to infer depth within its internal representations. Future work should explore how to combine these capabilities and insights with improved performance at synthesizing stable high-resolution trajectories.

This work has been supported by the German Research Foundation (DFG) projects 371923335 and 421703927.

Geometry-Free View Synthesis
Transformers and no 3D Priors

Supplementary Material

In this supplementary, we provide additional results obtained with our models in Sec. A. Sec. B summarizes models, ar-
chitectures and hyperparameters that were used in the main paper. After describing details on the training and test data in
Sec. C and on the uncertainty evaluations via the entropy in Sec. D, Sec. E concludes the supplementary material with a brief
discussion of the compression artifacts introduced by the usage of the VQGAN as the compression model.

A. Additional Results
Interactive Scene Exploration Interface Fig. 9 shows a preview of the videos available at https://git.io/JOnwn,
which demonstrate an interface for interactive 3D exploration of images. Starting from a single image, a user can use
keyboard and mouse to move the camera freely in 3D. To provide orientation, we warp the starting image to the current view
using a monocular depth estimate (corresponding to the MiDaS [54] baseline in Sec. 4.2). This enables a positioning of the
camera with real-time preview of the novel view. Once a desired camera position has been reached, the spacebar can be
pressed to autoregressively sample a novel view with our transformer model.
In the videos available at https://git.io/JOnwn, we use camera trajectories from the test sets of RealEstate10K
and ACID, respectively. The samples are produced by our impl.-depth model, and for an additional visual comparison, we
also include results obtained with the same methods that we compared to in Sec. 4.2.

Figure 9. Preview of the videos available at https://git.io/JOnwn, which demonstrate an interface for interactive 3D exploration
of images. Starting from a single image, it allows users to freely move around in 3D. See also Sec. A.

Small Viewpoint Changes & Continuous Trajectories For very small viewpoint changes, distortions due to compression
dominate the error of our approach (left of Fig. 10, where the x-axis uses PSIM between source and target view as a proxy
for the difficulty/magnitude of viewpoint change). Still, our approach outperforms previous approaches when considering the
average over small, medium and large viewpoint changes (solid lines at the left) and the gap quickly increases when consid-
ering more difficult examples (solid lines, right). See also Sec. E on the trade-off between distortion caused by compression
and computational efficiency. Our approach can also be applied to small viewpoint changes and the generation of continuous,
consistent trajectories; see Fig. 11. Note that the ability of previous approaches to synthesize small viewpoint changes well
does not enable high-quality synthesis of even moderately long trajectories.

Transformer Variants Over the Course of Training Fig. 12 reports the negative log-likelihood (NLL) over the course of
training on RealEstate and ACID, respectively. The models overfit to the training split of ACID early, which makes training on ACID much quicker and thus allows us to perform multiple training runs of each variant with different initializations.

Figure 10. Comparison on small and large viewpoint changes.

Figure 11. Continuous trajectories: Ours (top) & SynSin (bottom).

This enables an estimate of the significance of the results by computing the mean and standard deviation over three runs (solid line
and shaded area in Fig. 12, respectively).

Additional Qualitative Results For convenience, we also include additional qualitative results directly in this supplemen-
tary. Fig. 15 and 17 show additional qualitative comparisons on RealEstate10K and ACID, as in Fig. 5 and 6 of the main
paper. Fig. 16 and 18 demonstrate the diversity and consistency of samples by showing them along with their pixel-wise
standard deviation. Fig. 19 contains results from the depth-probing experiment of Sec. 4.3, and Fig. 13 from the entropy
visualization of Sec. 4.1.


Figure 12. Negative log-likelihood over the course of training on RealEstate10K (left) and ACID (right). Implicit variants achieve the best
results, see Sec. 4.1 for a discussion.

B. Architectures & Hyperparameters


Transformer The architecture of all transformer models discussed in this work follows the GPT-2 architecture [52]. More
specifically, the transformer consists of L transformer blocks, where each block performs the following operation on an input sequence z ∈ R^{|z|×d_e} (with |z| the length of z):

z_1 = LayerNorm(z)   (15)
z_2 = MultiHeadSelfAttention(z_1) + z   (16)
z_3 = LayerNorm(z_2)   (17)
z_4 = MLP(z_3) + z_2   (18)

In contrast to the global attention operation, the MLP is applied position-wise.
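For concreteness, a block of this form can be written in a few lines of PyTorch; the sketch below follows Eqs. (15)-(18) with pre-layer-norm residual connections and a two-layer MLP with 4·d_e hidden units, and assumes a GELU activation (the activation is not specified above). It is a sketch, not the released code.

```python
import torch.nn as nn

class TransformerBlock(nn.Module):
    """Minimal GPT-2-style block matching Eqs. (15)-(18)."""

    def __init__(self, d_e=1024, n_heads=16):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_e)
        self.attn = nn.MultiheadAttention(d_e, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_e)
        self.mlp = nn.Sequential(nn.Linear(d_e, 4 * d_e), nn.GELU(), nn.Linear(4 * d_e, d_e))

    def forward(self, z, attn_mask=None):
        z1 = self.ln1(z)                                             # Eq. (15)
        z2 = self.attn(z1, z1, z1, attn_mask=attn_mask,
                       need_weights=False)[0] + z                    # Eq. (16); causal mask passed in for the target part
        z3 = self.ln2(z2)                                            # Eq. (17)
        return self.mlp(z3) + z2                                     # Eq. (18), position-wise MLP
```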


Given an input sequence s and an embedding produced by the conditioning function f(x^src, T) ∈ R^{n×d_e} (see Sec. 3.2), the transformer maps s to a learnable embedding e(s) + e^pos =: ê^0 ∈ R^{|s|×d_e}, applies the L transformer blocks to the concatenated sequence [f(x^src, T), ê^0] and finally projects to |Z| logits π^L via a linear transformation W_head, which correspond to a categorical distribution over sequence elements, i.e.

e^0 = [f(x^src, T), e(s) + e^pos]   (19)
e^l = TransformerBlock(e^{l−1}),  l = 1 … L   (20)
π^L = W_head · LayerNorm(e^L).   (21)

Note that non-conditioning elements, i.e. the last |s| elements, are masked autoregressively [73]. For all experiments, we
use an embedding dimensionality d_e = 1024, L = 32 transformer blocks, 16 attention heads, two-layer MLPs with hidden dimensionalities of 4·d_e and a codebook of size |Z| = 16384. This setting results in a transformer with 437M parameters. We train the model using the AdamW [43] optimizer (with β_1 = 0.9, β_2 = 0.95) and apply weight decay of 0.01 on non-embedding parameters. We train for 500k steps, where we first linearly increase the learning rate from 2.5·10^{-6} to 1.5·10^{-4} during the first 5k steps, and then apply a cosine-decay learning rate schedule [42] towards zero.

VQGAN The architecture and training procedure of the VQGANs is adopted from [17], where we use a downsampling factor of f = 2^4. For the codebook Z, we use |Z| = 16384 entries and a dimensionality of d_z = 256. This means that any input x ∈ R^{H×W×C} will be mapped to a latent representation of size E(x) ∈ R^{H/2^4 × W/2^4 × 256}. For our experiments on RealEstate and ACID, where H = 208 and W = 368, this corresponds to a latent code of size 13 × 23 (which is then unrolled to a sequence of length |s| = 299). We use the authors' official implementation and pretrained models² and perform finetuning on frames of RealEstate and ACID for 50,000 steps on either dataset, resulting in two dataset-specific VQGANs.
2 see https://github.com/CompVis/taming-transformers
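As a quick sanity check of the sizes stated above (no new information), the downsampling factor f = 2^4 maps the 208 × 368 inputs to a 13 × 23 code grid, i.e. 299 tokens per image:

```python
H, W, f = 208, 368, 2 ** 4
h, w = H // f, W // f
print(h, w, h * w)   # 13 23 299 -> sequence length |s| = 299
```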

Other models For monocular depth estimates, we use MiDaS v2.1³. We use the official implementations and pretrained models for the comparison with 3DPhoto [63]⁴, SynSin [75]⁵ and InfNat [40]⁶.

C. Training and Testing Data


Training our conditional generative model requires examples consisting of (x^dst, x^src, T). Such training pairs can be obtained via SfM [81] applied to image sequences, which provides poses (R_i, t_i) for each frame x_i with respect to an arbitrary world coordinate system. For two frames x^src, x^dst from the sequence, the relative transformation is then given by R = R_dst R_src^{−1} and t = t_dst − R t_src. However, the scale of the camera translations obtained by SfM is also arbitrary and, without access to the full sequence, underspecified.
To train the model and to meaningfully compute reconstruction errors for the evaluation, we must resolve this ambiguity. To
do this, we also triangulate a sparse set of points for each sequence using COLMAP [60]. We then compute a monocular
depth estimate for each image using MiDaS [54] and compute the optimal affine scaling to align this depth estimate with the
scale of the camera pose. Finally, we normalize depth and camera translation by the minimum depth estimate.
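The sketch below illustrates this preprocessing under stated assumptions: world-to-camera SfM poses, so the relative pose follows the formulas above, and the "optimal affine scaling" realized as a least-squares fit of the MiDaS prediction to sparse triangulated depths (the exact fitting space used, e.g. depth vs. disparity, is an assumption). Helper names are hypothetical.

```python
import numpy as np

def relative_pose(R_src, t_src, R_dst, t_dst):
    """Relative camera motion, R = R_dst R_src^{-1}, t = t_dst - R t_src (world-to-camera poses assumed)."""
    R = R_dst @ R_src.T            # R_src^{-1} = R_src^T for a rotation matrix
    t = t_dst - R @ t_src
    return R, t

def align_and_normalize(d_midas, d_sparse, mask, t):
    """Affine-align a monocular depth estimate to sparse SfM depths, then normalize by the minimum depth."""
    x, y = d_midas[mask], d_sparse[mask]                   # depths at triangulated COLMAP points
    A = np.stack([x, np.ones_like(x)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)         # optimal affine scaling a*d + b
    d = a * d_midas + b
    scale = d.min()
    return d / scale, t / scale                            # normalized depth and camera translation
```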
All qualitative and quantitative results are obtained on a subset of the test splits of RealEstate10K [81] and ACID [40],
consisting of 564 source-target pairs, which have been selected to contain medium-forward, large-forward, medium-backward
and large-backward camera motions in equal parts. We will make this split publicly available along with our code.
A Note on Reported Metrics
Since our stated goal is to model large camera transformations, our evaluation focuses on this ability and thus differs from the
evaluation of SynSin, which is biased to small changes; see Tab. 4: With the SynSin evaluation (small cam-∆) we reproduce
the officially reported numbers and our choice to evaluate at the original aspect ratio (at 208 × 368) has minor effects. The
last three rows show the deterioration of metrics if we remove biases to small changes ∆: We (i) evaluate all test pairs, not
just the better of two views (w/o best of 2), (ii) remove 10.14% of test pairs that contain no camera change at all (w/o src ≡
tgt) and (iii) add 50% of larger viewpoint changes (w/ 50% large). The main paper reports results at 100% medium and large
viewpoint changes, but we include an additional analysis with small changes in Fig. 10.

Model PSIM ↓ PSNR ↑ SSIM ↑


SynSin (small cam-∆) 1.13 ± 0.54 23.03 ± 4.54 0.77 ± 0.12
SynSin (at 208 × 368) 1.32 ± 0.53 22.39 ± 4.24 0.76 ± 0.12
SynSin (w/o best of 2) 1.72 ± 0.72 19.75 ± 4.69 0.67 ± 0.16
SynSin (w/o src ≡ tgt) 1.84 ± 0.66 18.86 ± 3.98 0.64 ± 0.15
SynSin (w/ 50% large) 2.48 ± 0.91 16.05 ± 3.99 0.57 ± 0.13
Table 4. Effects of removing evaluation biases to small changes.

3 see https://github.com/intel-isl/MiDaS
4 see https://github.com/vt-vl-lab/3d-photo-inpainting
5 see https://github.com/facebookresearch/synsin/
6 see https://github.com/google-research/google-research/tree/master/infinite_nature

D. Details on Entropy Evaluation
Figure 13. Additional visualizations of the entropy of the predicted target code distribution for impl.-nodepth (panels per example: x^src, x^dst, entropy). Increased confidence (darker colors) in regions which are visible in the source image indicates its ability to relate source and target geometrically, without 3D bias. See also Sec. 4.1 and Sec. D.

As discussed in Sec. 4.1, the relationship between a source view x^src and a target view x^dst can be quantified via the entropy of the probability distribution that the transformer assigns to a target view x^dst, given a source frame x^src, camera transformation T and conditioning function f. More specifically, we first encode target, camera and source via the encoder E and the conditioning function f (see Sec. 3.1), i.e. s^dst = E(x^dst) and f(x^src, T). Next, for each element in the sequence s^dst, the (trained) transformer assigns a probability conditioned on the source and camera:

p(s^dst_i | s^dst_{<i}, f(x^src, T)),  0 ≤ i < |s^dst|,   (22)

where for our experiments the length of the target sequence is always |s^dst| = 13 · 23 = 299, see also Sec. B. The entropy H(s^dst_i, x^src) for each position i is then computed as

H(s^dst_i, x^src) = − ∑_{k=1}^{|Z|} p_k(s^dst_i | s^dst_{<i}, f(x^src, T)) · log p_k(s^dst_i | s^dst_{<i}, f(x^src, T)).   (23)

Reshaping to the latent dimensionality h × w and bicubic upsampling to the input's size H × W then produces the visualizations of transformer entropy as in Fig. 4 and Fig. 13. Note that this approach quantifies the transformer's uncertainty/surprise from a single example only and does not need to be evaluated on multiple examples.
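A minimal sketch of this computation is given below, assuming the same hypothetical `transformer`, `f` and `vqgan_encoder` handles as in the earlier sketches, a 13 × 23 latent grid and 208 × 368 inputs.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def entropy_map(x_src, x_dst, cam, h=13, w=23, H=208, W=368):
    """Per-position entropy of the predicted code distribution, Eqs. (22)-(23), as an H x W map."""
    s_dst = vqgan_encoder.codes(x_dst)                 # (1, h*w) ground-truth target tokens used as prefix
    logits = transformer(f(x_src, cam), s_dst)         # (1, h*w, |Z|) conditioned on source, camera and prefix
    p = logits.softmax(dim=-1)
    ent = -(p * p.clamp_min(1e-12).log()).sum(-1)      # Eq. (23), entropy per latent position
    return F.interpolate(ent.view(1, 1, h, w), size=(H, W), mode="bicubic")   # upsample as in Fig. 4/13
```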

E. Faithful Reconstructions/Compression Artifacts
Efficient training of the transformer models is enabled by the strong compression achieved with the VQGAN, which to some degree introduces artifacts but allows trading compute requirements for reconstruction quality. Larger discrete codes improve fidelity (see Fig. 14, Tab. 5), but a 4× larger code leads to approximately 16× larger costs when training the transformer (due to the O(n²) complexity of attention). Reducing such artifacts is thus a matter of scaling up hardware or training time. Additionally, it would also increase the time required to sample a novel view, which is currently 4.32 ± 0.04 seconds.
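The 4×/16× figures follow directly from the latent grid sizes in Fig. 14; a worked check (no new data from the paper):

```python
tokens_f16 = 13 * 23                 # VQGAN f16: 208 x 368 -> 13 x 23 = 299 tokens
tokens_f8 = 26 * 46                  # VQGAN f8:  208 x 368 -> 26 x 46 = 1196 tokens
ratio = tokens_f8 / tokens_f16
print(ratio, ratio ** 2)             # 4.0 16.0 -> quadratic attention cost grows roughly 16x
```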

Model PSIM ↓ R-FID ↓ PSNR ↑ SSIM ↑


VQGAN f16 1.48 ± 0.37 2.46 21.14 ± 2.45 0.67 ± 0.14
VQGAN f8 0.80 ± 0.28 1.52 25.23 ± 2.96 0.82 ± 0.11
Table 5. Reconstruction metrics for different compressions.

Figure 14. Qualitative visualization of different compression rates: inputs and reconstructions for VQGAN f8 (208 × 368 → 26 × 46) and VQGAN f16 (208 × 368 → 13 × 23).

Figure 15. Additional qualitative comparisons on RealEstate10K (columns: source, target, 3DPhoto [63], SynSin [75], expl.-det, impl.-depth, impl.-nodepth).

Figure 16. Additional samples on RealEstate10K (columns: source, σ, samples of variant impl.-depth). The second column depicts the pixel-wise standard deviation σ obtained from n = 32 samples.

Figure 17. Additional qualitative comparisons on ACID (columns: source, target, 3DPhoto [63], InfNat [40], expl.-det, impl.-depth, impl.-nodepth).

Figure 18. Additional samples on ACID (columns: source, σ, samples of variant impl.-depth). The second column depicts the pixel-wise standard deviation σ obtained from n = 32 samples.

Figure 19. Additional results on linearly probed depth maps for different transformer layers, as in Fig. 8 (columns: input, layer #0, layer #1, layer #4, layer #5, depth reconstruction). See Sec. 4.3.

References
[1] Sameer Agarwal, Noah Snavely, Ian Simon, Steven M. Seitz, and Richard Szeliski. Building rome in a day. In ICCV, pages 72–79.
IEEE Computer Society, 2009. 2
[2] Kara-Ali Aliev, Artem Sevastopolsky, Maria Kolos, Dmitry Ulyanov, and Victor S. Lempitsky. Neural point-based graphics. In
ECCV (22), volume 12367 of Lecture Notes in Computer Science, pages 696–712. Springer, 2020. 2, 5
[3] Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016. 3
[4] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In
ICLR, 2015. 3
[5] Chris Buehler, Michael Bosse, Leonard McMillan, Steven J. Gortler, and Michael F. Cohen. Unstructured lumigraph rendering. In
SIGGRAPH, pages 425–432. ACM, 2001. 2
[6] Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from
pixels. In ICML, volume 119 of Proceedings of Machine Learning Research, pages 1691–1703. PMLR, 2020. 5
[7] Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel.
Variational lossy autoencoder. In ICLR (Poster). OpenReview.net, 2017. 3
[8] Inchang Choi, Orazio Gallo, Alejandro J. Troccoli, Min H. Kim, and Jan Kautz. Extreme view synthesis. In ICCV, pages 7780–7789.
IEEE, 2019. 2
[9] Bin Dai and David P. Wipf. Diagnosing and enhancing VAE models. In ICLR (Poster). OpenReview.net, 2019. 3
[10] Paul E. Debevec, Camillo J. Taylor, and Jitendra Malik. Modeling and rendering architecture from photographs: A hybrid geometry-
and image-based approach. In SIGGRAPH, pages 11–20. ACM, 1996. 1, 2
[11] Helisa Dhamo, Keisuke Tateno, Iro Laina, Nassir Navab, and Federico Tombari. Peeking behind objects: Layered depth prediction
from a single image. Pattern Recognit. Lett., 125:333–340, 2019. 2
[12] Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, and Ilya Sutskever. Jukebox: A generative model
for music. CoRR, abs/2005.00341, 2020. 3
[13] Sander Dieleman. Musings on typicality, 2020. 3
[14] Alexey Dosovitskiy, Jost Tobias Springenberg, and Thomas Brox. Learning to generate chairs with convolutional neural networks.
In CVPR, pages 1538–1546. IEEE Computer Society, 2015. 2
[15] Alexey Dosovitskiy, Jost Tobias Springenberg, Maxim Tatarchenko, and Thomas Brox. Learning to generate chairs, tables and cars
with convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell., 39(4):692–705, 2017. 2
[16] Patrick Esser, Robin Rombach, and Björn Ommer. A disentangling invertible interpretation network for explaining latent representa-
tions. In CVPR, pages 9220–9229. IEEE, 2020. 3
[17] Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis. CoRR, abs/2012.09841,
2020. 3, 4, 11
[18] John Flynn, Michael Broxton, Paul E. Debevec, Matthew DuVall, Graham Fyffe, Ryan S. Overbeck, Noah Snavely, and Richard
Tucker. Deepview: View synthesis with learned gradient descent. In CVPR, pages 2367–2376. Computer Vision Foundation / IEEE,
2019. 2
[19] John Flynn, Ivan Neulander, James Philbin, and Noah Snavely. Deepstereo: Learning to predict new views from the world’s imagery.
CoRR, abs/1506.06825, 2015. 2
[20] Yasutaka Furukawa and Jean Ponce. Accurate, dense, and robust multiview stereopsis. IEEE Trans. Pattern Anal. Mach. Intell.,
32(8):1362–1376, 2010. 2
[21] Ravi Garg, B. G. Vijay Kumar, Gustavo Carneiro, and Ian D. Reid. Unsupervised CNN for single view depth estimation: Geometry
to the rescue. In ECCV (8), volume 9912 of Lecture Notes in Computer Science, pages 740–756. Springer, 2016. 2
[22] Clément Godard, Oisin Mac Aodha, and Gabriel J. Brostow. Unsupervised monocular depth estimation with left-right consistency.
In CVPR, pages 6602–6611. IEEE Computer Society, 2017. 2
[23] Clément Godard, Oisin Mac Aodha, Michael Firman, and Gabriel J. Brostow. Digging into self-supervised monocular depth estima-
tion. In ICCV, pages 3827–3837. IEEE, 2019. 2
[24] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua
Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014. 2
[25] Steven J. Gortler, Radek Grzeszczuk, Richard Szeliski, and Michael F. Cohen. The lumigraph. In SIGGRAPH, pages 43–54. ACM,
1996. 1, 2
[26] Peter Hedman, Suhib Alsisan, Richard Szeliski, and Johannes Kopf. Casual 3d photography. ACM Trans. Graph., 36(6):234:1–
234:15, 2017. 2
[27] Peter Hedman and Johannes Kopf. Instant 3d photography. ACM Trans. Graph., 37(4):101:1–101:12, 2018. 2
[28] Peter Hedman, Julien Philip, True Price, Jan-Michael Frahm, George Drettakis, and Gabriel J. Brostow. Deep blending for free-
viewpoint image-based rendering. ACM Trans. Graph., 37(6):257:1–257:15, 2018. 2
[29] Peter Hedman, Tobias Ritschel, George Drettakis, and Gabriel J. Brostow. Scalable inside-out image-based rendering. ACM Trans.
Graph., 35(6):231:1–231:11, 2016. 2

[30] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale
update rule converge to a local Nash equilibrium. In NIPS, pages 6626–6637, 2017. 5, 6
[31] Youichi Horry, Ken-ichi Anjyo, and Kiyoshi Arai. Tour into the picture: using a spidery mesh interface to make animation from a
single image. In SIGGRAPH, pages 225–232. ACM, 1997. 2
[32] Ronghang Hu, Nikhila Ravi, Alexander C. Berg, and Deepak Pathak. Worldsheet: Wrapping the world in a 3d sheet for view synthesis
from a single image. In ICCV, 2021. 2
[33] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. In ICLR (Poster). OpenReview.net,
2017. 3
[34] Nima Khademi Kalantari, Ting-Chun Wang, and Ravi Ramamoorthi. Learning-based view synthesis for light field cameras. ACM
Trans. Graph., 35(6):193:1–193:10, 2016. 2
[35] Johannes Kopf, Kevin Matzen, Suhib Alsisan, Ocean Quigley, Francis Ge, Yangming Chong, Josh Patterson, Jan-Michael Frahm,
Shu Wu, Matthew Yu, Peizhao Zhang, Zijian He, Peter Vajda, Ayush Saraf, and Michael F. Cohen. One shot 3d photography. ACM
Trans. Graph., 39(4):76, 2020. 2
[36] Tejas D. Kulkarni, William F. Whitney, Pushmeet Kohli, and Joshua B. Tenenbaum. Deep convolutional inverse graphics network.
In NIPS, pages 2539–2547, 2015. 2
[37] Marc Levoy and Pat Hanrahan. Light field rendering. In SIGGRAPH, pages 31–42. ACM, 1996. 1, 2
[38] Zhengqi Li, Wenqi Xian, Abe Davis, and Noah Snavely. Crowdsampling the plenoptic function. In ECCV (1), volume 12346 of
Lecture Notes in Computer Science, pages 178–196. Springer, 2020. 2
[39] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick.
Microsoft COCO: common objects in context. In ECCV (5), volume 8693 of Lecture Notes in Computer Science, pages 740–755.
Springer, 2014. 7
[40] Andrew Liu, Richard Tucker, Varun Jampani, Ameesh Makadia, Noah Snavely, and Angjoo Kanazawa. Infinite nature: Perpetual
view generation of natural scenes from a single image. CoRR, abs/2012.09855, 2020. 2, 4, 6, 7, 8, 12, 17
[41] Miaomiao Liu, Xuming He, and Mathieu Salzmann. Geometry-aware deep network for single-image novel view synthesis. In CVPR,
pages 4616–4624. IEEE Computer Society, 2018. 2
[42] Ilya Loshchilov and Frank Hutter. SGDR: stochastic gradient descent with warm restarts. In ICLR (Poster). OpenReview.net, 2017.
11
[43] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR (Poster). OpenReview.net, 2019. 11
[44] Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables.
In ICLR (Poster). OpenReview.net, 2017. 3
[45] Moustafa Meshry, Dan B. Goldman, Sameh Khamis, Hugues Hoppe, Rohit Pandey, Noah Snavely, and Ricardo Martin-Brualla.
Neural rerendering in the wild. In CVPR, pages 6878–6887. Computer Vision Foundation / IEEE, 2019. 2
[46] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing
scenes as neural radiance fields for view synthesis. In ECCV (1), volume 12346 of Lecture Notes in Computer Science, pages
405–421. Springer, 2020. 1, 2
[47] Simon Niklaus and Feng Liu. Softmax splatting for video frame interpolation. In CVPR, pages 5436–5445. IEEE, 2020. 4
[48] Simon Niklaus, Long Mai, Jimei Yang, and Feng Liu. 3d Ken Burns effect from a single image. ACM Trans. Graph., 38(6):184:1–
184:15, 2019. 2
[49] Ankur P. Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language
inference. In EMNLP, pages 2249–2255. The Association for Computational Linguistics, 2016. 3
[50] Eunbyung Park, Jimei Yang, Ersin Yumer, Duygu Ceylan, and Alexander C. Berg. Transformation-grounded image generation
network for novel 3d view synthesis. In CVPR, pages 702–711. IEEE Computer Society, 2017. 2
[51] Eric Penner and Li Zhang. Soft 3d reconstruction for view synthesis. ACM Trans. Graph., 36(6):235:1–235:11, 2017. 2
[52] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask
learners. 2019. 3, 11
[53] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot
text-to-image generation. CoRR, abs/2102.12092, 2021. 3
[54] René Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, and Vladlen Koltun. Towards robust monocular depth estimation:
Mixing datasets for zero-shot cross-dataset transfer. IEEE Trans. Pattern Anal. Mach. Intell., 2020. 2, 4, 7, 9, 12
[55] Ali Razavi, Aäron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with VQ-VAE-2. In NeurIPS, pages
14837–14847, 2019. 3
[56] Gernot Riegler and Vladlen Koltun. Free view synthesis. In ECCV (19), volume 12364 of Lecture Notes in Computer Science, pages
623–640. Springer, 2020. 1, 2, 5
[57] Gernot Riegler and Vladlen Koltun. Stable view synthesis. CoRR, abs/2011.07233, 2020. 1, 2
[58] Chris Rockwell, David F. Fouhey, and Justin Johnson. Pixelsynth: Generating a 3d-consistent experience from a single image, 2021.
3

[59] Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P. Kingma. PixelCNN++: Improving the PixelCNN with discretized logistic
mixture likelihood and other modifications. In ICLR (Poster). OpenReview.net, 2017. 3
[60] Johannes L. Schönberger and Jan-Michael Frahm. Structure-from-motion revisited. In CVPR, pages 4104–4113. IEEE Computer
Society, 2016. 2, 12
[61] Johannes L. Schönberger, Enliang Zheng, Jan-Michael Frahm, and Marc Pollefeys. Pixelwise view selection for unstructured multi-
view stereo. In ECCV (3), volume 9907 of Lecture Notes in Computer Science, pages 501–518. Springer, 2016. 2
[62] Jonathan Shade, Steven J. Gortler, Li-wei He, and Richard Szeliski. Layered depth images. In SIGGRAPH, pages 231–242. ACM,
1998. 2
[63] Meng-Li Shih, Shih-Yang Su, Johannes Kopf, and Jia-Bin Huang. 3d photography using context-aware layered depth inpainting. In
CVPR, pages 8025–8035. IEEE, 2020. 2, 4, 7, 8, 12, 15, 17
[64] Pratul P. Srinivasan, Richard Tucker, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng, and Noah Snavely. Pushing the boundaries of
view extrapolation with multiplane images. In CVPR, pages 175–184. Computer Vision Foundation / IEEE, 2019. 2
[65] Pratul P. Srinivasan, Tongzhou Wang, Ashwin Sreelal, Ravi Ramamoorthi, and Ren Ng. Learning to synthesize a 4d RGBD light
field from a single image. In ICCV, pages 2262–2270. IEEE Computer Society, 2017. 2
[66] Shao-Hua Sun, Minyoung Huh, Yuan-Hong Liao, Ning Zhang, and Joseph J. Lim. Multi-view to novel view: Synthesizing novel
views with self-learned confidence. In ECCV (3), volume 11207 of Lecture Notes in Computer Science, pages 162–178. Springer,
2018. 2
[67] Richard Szeliski and Polina Golland. Stereo matching with transparency and matting. Int. J. Comput. Vis., 32(1):45–61, 1999. 2
[68] Maxim Tatarchenko, Alexey Dosovitskiy, and Thomas Brox. Multi-view 3d models from single images with a convolutional network.
In ECCV (7), volume 9911 of Lecture Notes in Computer Science, pages 322–337. Springer, 2016. 2
[69] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. In ICLR, 2016. 6
[70] Justus Thies, Michael Zollhöfer, and Matthias Nießner. Deferred neural rendering: image synthesis using neural textures. ACM
Trans. Graph., 38(4):66:1–66:12, 2019. 2
[71] Richard Tucker and Noah Snavely. Single-view view synthesis with multiplane images. In CVPR, pages 548–557. IEEE, 2020. 2
[72] Aäron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In NIPS, pages 6306–6315,
2017. 3, 4
[73] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin.
Attention is all you need. In NIPS, pages 5998–6008, 2017. 3, 4, 11
[74] Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. Image quality assessment: from error visibility to structural
similarity. IEEE Trans. Image Process., 13(4):600–612, 2004. 5, 6
[75] Olivia Wiles, Georgia Gkioxari, Richard Szeliski, and Justin Johnson. SynSin: End-to-end view synthesis from a single image. In
CVPR, pages 7465–7475. IEEE, 2020. 1, 2, 7, 12, 15
[76] Junyuan Xie, Ross B. Girshick, and Ali Farhadi. Deep3d: Fully automatic 2d-to-3d video conversion with deep convolutional neural
networks. In ECCV (4), volume 9908 of Lecture Notes in Computer Science, pages 842–857. Springer, 2016. 2
[77] Zexiang Xu, Sai Bi, Kalyan Sunkavalli, Sunil Hadap, Hao Su, and Ravi Ramamoorthi. Deep view synthesis from sparse photometric
images. ACM Trans. Graph., 38(4):76:1–76:13, 2019. 2
[78] Jimei Yang, Scott E. Reed, Ming-Hsuan Yang, and Honglak Lee. Weakly-supervised disentangling with recurrent transformations
for 3d view synthesis. In NIPS, pages 1099–1107, 2015. 2
[79] Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as
a perceptual metric. In CVPR, pages 586–595. IEEE Computer Society, 2018. 5, 6
[80] Tinghui Zhou, Matthew Brown, Noah Snavely, and David G. Lowe. Unsupervised learning of depth and ego-motion from video. In
CVPR, pages 6612–6619. IEEE Computer Society, 2017. 2
[81] Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snavely. Stereo magnification: learning view synthesis using
multiplane images. ACM Trans. Graph., 37(4):65:1–65:12, 2018. 2, 6, 12
[82] Tinghui Zhou, Shubham Tulsiani, Weilun Sun, Jitendra Malik, and Alexei A. Efros. View synthesis by appearance flow. In ECCV
(4), volume 9908 of Lecture Notes in Computer Science, pages 286–301. Springer, 2016. 2
