How Pixel’s Super Res Zoom works
Super Res Zoom launched on Pixel phones with the Pixel 3, and it's only gotten better over the years — in fact, I recently realized how incredibly useful it is while shooting Yosemite’s Firefall phenomenon, which happens when a specific angle of the setting sun makes an already stunning waterfall look lit from within. But because I couldn’t get terribly close, I needed to rely on my Pixel 7 Pro’s Super Res Zoom feature.
“Pixel’s approach to zoom is one that combines state-of-the-art hardware, a bunch of awesome software and then a lot of AI on top of that,” says Alexander Schiffhauer, a Pixel Camera & AI Group Product Manager. “This means the quality works well throughout a range of zoom settings — not just one specific setting, like 5x or 10x.” Here are a few ways hardware, software and AI come together to make Super Res Zoom work.
Hardware
While all Pixel 3 and newer phones have Super Res Zoom, I was using the Pixel 7 Pro on my recent trip, and Alex explained why the camera’s architecture was so useful for me. “Pixel 7 Pro has a pro-level camera, which is actually composed of three rear cameras: a 0.5x ultrawide camera, a 1x camera — the main camera when you open the camera app — and a 5x telephoto camera,” he says. A telephoto camera is one you can use to zoom in on subjects in the distance, and Pixel’s takes over once you zoom anywhere between 5x and 30x.
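To make that hand-off concrete, here’s a minimal sketch of how a phone might route a requested zoom level to one of those three cameras. The breakpoints come from the Pixel 7 Pro lineup Alex describes; the selection logic itself is just an illustration, not Google’s actual code.

```python
def pick_camera(zoom: float) -> str:
    """Choose which rear camera serves a requested zoom level.

    Breakpoints follow the Pixel 7 Pro cameras described above
    (0.5x ultrawide, 1x main, 5x telephoto); the selection logic
    is a simplified illustration, not Google's code.
    """
    if zoom < 1.0:
        return "ultrawide (0.5x)"  # wider than the main camera's field of view
    elif zoom < 5.0:
        return "main (1x)"         # main sensor, cropped digitally as zoom grows
    else:
        return "telephoto (5x)"    # folded telephoto covers 5x through 30x

for z in (0.5, 1.0, 2.0, 5.0, 10.0, 30.0):
    print(f"{z:>4}x -> {pick_camera(z)}")
```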
All of that zooming potential comes in a surprisingly small package, especially compared to typical telephoto camera lenses, which are pretty large. To capture images from far away, you need a lot of light, bent in the right way. Traditional telephoto cameras use a stack of lenses to accomplish this — those lenses capture a lot of light and then bend it just so before it hits a sensor in the camera. The sensor's job is to take that light and turn it into a signal that the phone’s software reads and translates into photo pixels you see on the phone’s screen.
You’ve likely noticed that the Pixel 7 Pro doesn’t have a gigantic, tube-like lens that sticks out, though. “That's because we used a prism that allowed us to stack our lenses sideways, and we built that horizontally right into the camera bar,” Alex says. The light enters the camera and hits the prism, which redirects the light horizontally toward the sensor. Once that light hits the sensor, software and AI algorithms start doing their work.
First introduced with Pixel 6 Pro, Pixel’s folded telephoto camera uses a prism to bend light 90 degrees.
Software
Here’s something important to understand about how Super Res Zoom works: When you zoom in and take a photo, your phone is actually adapting to your zoom range and capturing multiple images at roughly the same time. After you press the shutter button, the software and machine learning algorithms work together to create the best version of all those image captures.
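The payoff of “capture many, merge to one” is easiest to see in code. Below is a minimal sketch that assumes the burst frames are already aligned; real merge algorithms weight pixels by sharpness and motion, but even a plain average shows how merging a burst cuts noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "true" scene and a burst of noisy captures of it.
scene = rng.uniform(0.0, 1.0, size=(64, 64))
burst = [scene + rng.normal(0.0, 0.1, size=scene.shape) for _ in range(8)]

# Averaging N aligned frames reduces sensor noise by roughly 1/sqrt(N).
merged = np.mean(burst, axis=0)

noise_single = np.std(burst[0] - scene)
noise_merged = np.std(merged - scene)
print(f"single frame noise: {noise_single:.3f}")
print(f"merged burst noise: {noise_merged:.3f}")  # about 0.1 / sqrt(8)
```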
On the software front, the phone’s telephoto camera uses HDR+ with Bracketing to merge images taken at different exposures. This process gets you the best lighting and detail in one photo instead of sacrificing one for the other. And for ease of use, HDR+ with Bracketing automatically turns on when you use any level of zoom — the telephoto lens takes those multiple images almost instantaneously, so quickly you don’t notice and don’t have to prompt it.
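Here’s a toy illustration of the bracketing idea, not HDR+ itself: a long exposure captures clean shadows but clips highlights, a short one preserves highlights, and a merge keeps the best of each. The pixel values and the 4x exposure ratio are made up for the example.

```python
import numpy as np

def merge_bracketed(short: np.ndarray, long: np.ndarray,
                    exposure_ratio: float) -> np.ndarray:
    """Toy exposure-bracketing merge (illustrative, not HDR+ itself).

    `long` has clean shadows but clips in highlights; `short` keeps
    highlights but is noisier. Where `long` is near clipping, fall
    back to `short`, scaled up by the exposure ratio.
    """
    clipped = long > 0.95                      # pixels the long exposure blew out
    merged = np.where(clipped, short * exposure_ratio, long)
    return np.clip(merged, 0.0, exposure_ratio)

# Scene brightness ranging from deep shadow to a highlight 4x over white.
radiance = np.linspace(0.0, 4.0, 9)
long_exp = np.clip(radiance, 0.0, 1.0)         # clips above 1.0
short_exp = np.clip(radiance / 4.0, 0.0, 1.0)  # 1/4 the exposure time

print(merge_bracketed(short_exp, long_exp, exposure_ratio=4.0))
```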
Another important piece of software for Super Res Zoom is “remosaicing,” which works with HDR+ with Bracketing. “By cropping into the inner portion of Pixel 7 Pro’s 48-megapixel sensor, Super Res Zoom can output 12-megapixel photos at 10x,” Alex explains. Those cropped 12-megapixel photos are noisier than the sensor’s normal full-area output, though, and they arrive in a different sensor format. So a new algorithm called remosaicing converts that sensor data into something HDR+ with Bracketing can work with to reduce the amount of noise in the final output. “So you get the best of both worlds: high resolution and low noise,” Alex says.
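The megapixel math in that quote is easy to verify: keeping the inner half of the sensor in each dimension keeps a quarter of its pixels and doubles the effective magnification on top of the telephoto’s native 5x.

```python
# Why a center crop of a 48 MP sensor yields 12 MP photos at 10x.
# Back-of-the-envelope arithmetic, not pipeline code.
sensor_mp = 48
optical_zoom = 5           # the telephoto camera's native magnification

crop_factor = 2            # keep the inner 1/2 of each dimension...
cropped_mp = sensor_mp / crop_factor**2  # ...which keeps 1/4 of the pixels

total_zoom = optical_zoom * crop_factor
print(f"{cropped_mp:.0f} MP at {total_zoom}x")  # -> 12 MP at 10x
```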
Machine learning
Finally, a handful of machine learning algorithms contribute to Super Res Zoom. One example is Fusion Zoom, an algorithm that aligns and merges images from multiple cameras, ensuring that your photos still look great when they’re taken at zoom levels that fall between your main camera and your telephoto camera (somewhere between 2x and 5x, say).
The main camera and the telephoto camera work together to give you crisp, detailed photos.
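Here’s a toy version of that camera hand-off, with made-up arrays standing in for frames. It simply pastes the telephoto’s sharper view into the center of the main camera’s wider frame; the real Fusion Zoom aligns the frames and blends the seam smoothly, but the geometry is the same.

```python
import numpy as np

def fuse_center(main_frame: np.ndarray, tele_frame: np.ndarray) -> np.ndarray:
    """Toy stand-in for merging two cameras' views (not Google's Fusion Zoom).

    Between 1x and 5x, the telephoto only covers the center of the
    requested framing, so this sketch pastes its sharper pixels into
    the middle of the main camera's frame.
    """
    fused = main_frame.copy()
    h, w = main_frame.shape
    th, tw = tele_frame.shape
    top, left = (h - th) // 2, (w - tw) // 2
    fused[top:top + th, left:left + tw] = tele_frame
    return fused

main = np.zeros((8, 8))  # the main camera's wider, softer view
tele = np.ones((4, 4))   # the telephoto's narrower, sharper view of the center
print(fuse_center(main, tele))
```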
Another algorithm that helps with Super Res Zoom is Zoom Stabilization, which fights shakiness as you frame your shot. “At 15x and beyond, you’ll notice a little mini map that shows up at the top right corner of your screen,” Alex says. “It stabilizes your subject, because it can be really hard to compose a photo on your phone when you’re zoomed in far — the tiniest hand movements can throw your camera off.” And beyond 20x zoom, Pixel 7 Pro uses a new machine-learning upscaler, a neural network that enhances the detail of your photos. The more you zoom in, the more the telephoto camera leans on AI.
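As a rough illustration of why stabilization matters at high zoom, here’s a sketch that smooths a jittery stream of hand-shake offsets with an exponential moving average. It’s a stand-in for the idea, not Pixel’s actual algorithm; the offsets are simulated.

```python
import numpy as np

def stabilize(offsets, smoothing: float = 0.9):
    """Toy zoom stabilization (an illustration, not Pixel's algorithm).

    At high zoom, tiny hand movements become large pixel offsets. An
    exponential moving average of the measured offsets produces a
    steadier "virtual" framing to crop the final image from.
    """
    smoothed, state = [], np.zeros(2)
    for off in offsets:
        state = smoothing * state + (1.0 - smoothing) * np.asarray(off)
        smoothed.append(state.copy())
    return smoothed

# Simulated per-frame hand shake, in pixels, at a high zoom level.
rng = np.random.default_rng(1)
shake = rng.normal(0.0, 20.0, size=(6, 2))
for raw, smooth in zip(shake, stabilize(shake)):
    print(f"raw {raw.round(1)} -> stabilized {smooth.round(1)}")
```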
None of these three areas could create Super Res Zoom without the others, and that’s what Alex appreciates about it. “It’s this deep integration across these areas of hardware, software, and AI that allows us to give you those photos,” he says. “You can zoom in confidently to get whatever perspective you want, and you can know your photos are going to come out beautifully.”