iPhone 11 and Pixel 4 Cameras

When Apple marketing chief Phil Schiller detailed the iPhone 11's new camera abilities in September, he boasted, "It's computational photography mad science." And when Google debuts its new Pixel 4 phone on Tuesday, you can bet it'll be showing off its own pioneering work in computational photography.

The reason is simple: Computational photography can improve your camera shots immeasurably, helping your phone match, and in some ways surpass, even expensive cameras. 
But what exactly is computational photography?
In short, it's digital processing to get more out of your camera hardware -- for example, by improving color and lighting while pulling details out of the dark. That's really important given the limitations of the tiny image sensors and lenses in our phones, and the increasingly central role those cameras play in our lives.

Heard of terms like Apple's Night Mode and Google's Night Sight? Those modes that extract bright, detailed shots out of difficult dim conditions are computational photography at work. But it's showing up everywhere. It's even built into Phase One's $57,000 medium-format digital cameras.

HDR and Panoramas


One early computational photography benefit is called HDR, short for high dynamic range. Small sensors aren't very sensitive, which makes them struggle with both bright and dim areas in a scene. But by taking two or more photos at different brightness levels and then merging the shots into a single photo, a digital camera can approximate a much higher dynamic range. In short, you can see more details in both bright highlights and dark shadows.

There are drawbacks. Sometimes HDR shots look artificial. You can get artifacts when subjects move from one frame to the next. But the fast electronics and better algorithms in our phones have steadily improved the approach since Apple introduced HDR with the iPhone 4 in 2010. HDR is now the default mode for most phone cameras.
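
Conceptually, the merging step can be approximated with off-the-shelf tools. Here's a minimal sketch using OpenCV's exposure fusion, assuming three bracketed shots saved as dark.jpg, normal.jpg and bright.jpg (the file names are placeholders, not anything from a real phone pipeline).

    # Minimal exposure-fusion sketch: merge bracketed shots of the same scene.
    import cv2

    # Three hypothetical exposures of the same scene, dark to bright.
    frames = [cv2.imread(name) for name in ("dark.jpg", "normal.jpg", "bright.jpg")]

    # Roughly align the frames so hand shake doesn't cause ghosting artifacts.
    cv2.createAlignMTB().process(frames, frames)

    # Mertens fusion keeps the best-exposed parts of each frame.
    fused = cv2.createMergeMertens().process(frames)

    # The result is floating point in roughly [0, 1]; convert back to 8-bit.
    cv2.imwrite("hdr_fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))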

Google took HDR to the next level with its HDR Plus approach. Instead of combining photos taken at dark, ordinary and bright exposures, it captured a larger number of dark, underexposed frames. Artfully stacking these shots together let it build up to the correct exposure, but the approach did a better job with bright areas, so blue skies looked blue instead of washed out.
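
The stacking idea itself is simple enough to sketch. Assuming a burst of deliberately underexposed frames saved as underexposed_0.jpg through underexposed_7.jpg (hypothetical names), averaging them and then pushing the exposure digitally gives a rough flavor of the approach; real pipelines also align the frames and tone-map far more carefully.

    # Rough HDR Plus-style sketch: stack dark frames, then brighten the result.
    import cv2
    import numpy as np

    # A hypothetical burst of underexposed frames of the same scene.
    frames = [cv2.imread(f"underexposed_{i}.jpg").astype(np.float32) for i in range(8)]

    # Averaging suppresses noise while the short exposures protect highlights.
    stack = np.mean(frames, axis=0)

    # Digitally push the exposure; a real pipeline uses a smarter tone map.
    result = np.clip(stack * 4.0, 0, 255).astype(np.uint8)
    cv2.imwrite("hdr_plus_sketch.jpg", result)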

Apple embraced the same idea, called Smart HDR, with the iPhone XS generation in 2018.

Panorama stitching, too, is a form of computational photography. Joining a collection of side-by-side shots lets your phone build one immersive, superwide image. When you consider all the subtleties of matching exposure, colors, and scenery, it can be a pretty sophisticated process. Smartphones these days let you build panoramas just by sweeping your phone from one side of the scene to the other.
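
OpenCV exposes that whole pipeline behind a single high-level call, which gives a feel for what the phone is doing under the hood. This sketch assumes three overlapping shots named left.jpg, middle.jpg and right.jpg.

    # Minimal panorama-stitching sketch using OpenCV's high-level Stitcher.
    import cv2

    shots = [cv2.imread(name) for name in ("left.jpg", "middle.jpg", "right.jpg")]

    stitcher = cv2.Stitcher_create()
    status, panorama = stitcher.stitch(shots)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("panorama.jpg", panorama)
    else:
        print("Stitching failed; the shots may not overlap enough.")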

Seeing in 3D

Another major computational photography technique is seeing in 3D. Apple uses dual cameras to see the world in stereo, just like you can because your eyes are a few inches apart. Google, with only one main camera on its Pixel 3, has used image sensor tricks and AI algorithms to figure out how far away elements of a scene are. The huge benefit is portrait mode, the effect that shows a subject in sharp focus but blurs the background into that creamy smoothness -- "nice bokeh," in photography jargon.

It's what high-end SLRs with big, expensive lenses are famous for. What SLRs do with physics, phones do with math. First, they turn their 3D data into what's called a depth map, a version of the scene that knows how far away each pixel in the photo is from the camera. Pixels that are part of the up-close subject stay sharp, but pixels behind are blurred with their neighbors.
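
If you already have a depth map, the blurring step is straightforward to sketch. This toy example assumes a photo and a matching grayscale depth map (smaller values meaning closer) plus a hypothetical threshold for what counts as the subject; real portrait modes blur progressively with distance and simulate lens bokeh rather than using a plain Gaussian blur.

    # Toy portrait-mode sketch: keep near pixels sharp, blur the background.
    import cv2
    import numpy as np

    photo = cv2.imread("portrait.jpg")
    depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)  # 0 = near, 255 = far

    # Blur the whole frame, then restore sharp pixels where the subject is close.
    blurred = cv2.GaussianBlur(photo, (31, 31), 0)
    near_mask = (depth < 100)[..., None]  # hypothetical "subject" threshold

    result = np.where(near_mask, photo, blurred)
    cv2.imwrite("portrait_blur.jpg", result)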

Portrait mode technology can be used for other purposes, too. It's also how Apple enables its studio lighting effect, which revamps photos so it looks like a person is standing in front of a black or white screen.

Depth information can also help break down a scene into segments so your phone can do things like better match out-of-kilter colors in shady and bright areas. Google doesn't do that, at least not yet, but it has raised the idea as interesting.

Night Vision

One happy by-product of the HDR Plus approach was Night Sight, introduced on the Google Pixel 3 in 2018. It used the same technology -- picking a steady master photo and layering on several other frames to build one bright exposure. Apple followed suit in 2019 with Night Mode on the iPhone 11 and 11 Pro phones. These modes address a major shortcoming of phone photography: blurry or dark photos taken at bars, restaurants, parties and even ordinary indoor situations where light is scarce. In real-world photography, you can't count on sunlight.
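
A crude version of that "steady master plus layered frames" idea can be sketched in a few lines. This assumes a handheld burst saved as night_0.jpg through night_5.jpg; the sharpest frame is chosen as the master via a standard blur score, and the rest are averaged in to brighten and denoise. Real implementations align frames and reject ghosting first.

    # Rough night-mode sketch: pick the steadiest frame, then merge the burst.
    import cv2
    import numpy as np

    burst = [cv2.imread(f"night_{i}.jpg") for i in range(6)]

    def sharpness(img):
        # Variance of the Laplacian is a common sharpness/blur score.
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    master = max(burst, key=sharpness).astype(np.float32)
    merged = np.mean([f.astype(np.float32) for f in burst], axis=0)

    # Lean on the sharp master for detail and the merged stack for low noise.
    out = np.clip(0.5 * master + 0.5 * merged, 0, 255).astype(np.uint8)
    cv2.imwrite("night_sketch.jpg", out)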


Night modes have opened up new avenues for creative expression. They're great for urban streetscapes with neon lights, especially if you've got helpful rain to make roads reflect all the color. Night modes can even pick out stars.

Super-Duper Resolution

One area where Google lagged Apple's top-end phones was zooming in to distant subjects. Apple had an extra camera with a longer focal length; Google used a couple of clever computational photography tricks that closed the gap.

The first is called super-resolution. It relies on a fundamental improvement to a digital camera process called demosaicing. When your camera takes a photo, it captures only red, green or blue data for each pixel. Demosaicing fills in the missing color data so each pixel has values for all three color components.
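
OpenCV can do a basic version of this interpolation directly, which makes the idea concrete. The sketch below assumes a single-channel Bayer mosaic saved as raw_bayer.png and one particular Bayer pattern; the conversion constant would need to match the actual sensor layout.

    # Minimal demosaicing sketch: interpolate a Bayer mosaic into full color.
    import cv2

    # A hypothetical single-channel raw mosaic, one color sample per pixel.
    bayer = cv2.imread("raw_bayer.png", cv2.IMREAD_GRAYSCALE)

    # Fill in the two missing color values at every pixel from its neighbors.
    rgb = cv2.cvtColor(bayer, cv2.COLOR_BayerBG2BGR)
    cv2.imwrite("demosaiced.png", rgb)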

Google's Pixel 3 counted on the fact that your hands wobble when taking photos, and that wobble lets the camera figure out the true red, green and blue data for each element of the scene without demosaicing. That better source data means Google can digitally zoom in to photos better than with the usual methods. Google calls it Super Res Zoom. In general, optical zoom, like with a zoom lens or secondary camera, produces better results than digital zoom.
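
The core trick, often called shift-and-add super-resolution, can be sketched crudely: estimate the tiny sub-pixel shift of each burst frame, then accumulate the frames on a finer grid. This toy version assumes grayscale burst frames named burst_0.png and so on, and it glosses over the careful motion handling a real pipeline needs.

    # Toy shift-and-add super-resolution over a handheld burst.
    import cv2
    import numpy as np

    frames = [cv2.imread(f"burst_{i}.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
              for i in range(8)]
    ref = frames[0]
    h, w = ref.shape
    accum = np.zeros((h * 2, w * 2), np.float32)

    for frame in frames:
        # Estimate this frame's sub-pixel shift relative to the reference.
        (dx, dy), _ = cv2.phaseCorrelate(ref, frame)
        # Undo the shift (doubled for the 2x grid) while upscaling, then accumulate.
        warp = np.float32([[2, 0, -2 * dx], [0, 2, -2 * dy]])
        accum += cv2.warpAffine(frame, warp, (w * 2, h * 2))

    result = np.clip(accum / len(frames), 0, 255).astype(np.uint8)
    cv2.imwrite("super_res_sketch.png", result)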

On top of the super-resolution technique, Google added a technology called RAISR to squeeze out even more image quality. Here, the company's computers examined countless photos ahead of time to train an AI model on what details are likely to match coarser features. In other words, it's using patterns spotted in other photos so the software can zoom in farther than a camera physically can.
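
RAISR itself isn't publicly packaged, but OpenCV's contrib dnn_superres module offers a rough stand-in for the general idea of learned upscaling. This sketch assumes the opencv-contrib-python package and a downloaded pretrained model file such as EDSR_x2.pb; both are assumptions, and none of this is Google's actual technology.

    # A stand-in for learned upscaling: apply a pretrained super-resolution model.
    import cv2

    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    sr.readModel("EDSR_x2.pb")   # hypothetical path to a downloaded model
    sr.setModel("edsr", 2)       # model name and upscale factor

    zoomed = sr.upsample(cv2.imread("cropped_zoom.jpg"))
    cv2.imwrite("ai_upscaled.jpg", zoomed)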

iPhone's Deep Fusion

New with the iPhone 11 this year is Apple's Deep Fusion, a more sophisticated variation of the same multiphoto approach in low to medium light. It takes four pairs of images -- four long exposures and four short -- and then one longer-exposure shot. It finds the best combinations, analyzes the shots to figure out what kind of subject matter it should optimize for, then marries the different frames together.

Deep Fusion is the feature that prompted Schiller to boast of the iPhone 11's "computational photography mad science." But it won't arrive until iOS 13.2, which is in beta testing now.

Does Computational Photography Fall Short?

Computational photography is useful, but the limits of hardware and the laws of physics still matter in photography. Stitching shots into panoramas and digitally zooming are all well and good, but smartphones with more and better cameras have a stronger foundation for computational photography.

That's why Apple added new ultra-wide cameras to the iPhone 11 and 11 Pro this year, and why the Pixel 4 is rumored to be getting a new telephoto lens. And it's why the Huawei P30 Pro and Oppo Reno 10X Zoom have 5X "periscope" telephoto lenses.
You can do only so much with software.

Laying The Groundwork

Computer processing arrived with the very first digital cameras. It's so basic and essential that we don't even call it computational photography, but it's still important and, happily, still improving.

First, there's demosaicing to fill in missing color data, a process that's easy with uniform regions like blue skies but hard with fine detail like hair. Then there's white balance, in which the camera tries to compensate for things like blue-toned shadows or orange-toned incandescent lightbulbs. Sharpening makes edges crisper, tone curves make a nice balance of dark and light shades, saturation makes colors pop, and noise reduction gets rid of the color speckles that mar images shot in dim conditions. Long before the cutting-edge stuff happens, computers do a lot more work than film ever did.
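
Two of those everyday steps are easy to sketch: a gray-world white balance and an unsharp-mask sharpening pass. The input file name is a placeholder, and real camera pipelines tune these steps far more carefully.

    # Sketch of basic processing: gray-world white balance plus unsharp masking.
    import cv2
    import numpy as np

    img = cv2.imread("photo.jpg").astype(np.float32)

    # Gray-world white balance: scale each channel so their averages match.
    channel_means = img.reshape(-1, 3).mean(axis=0)
    balanced = np.clip(img * (channel_means.mean() / channel_means), 0, 255)

    # Unsharp mask: add back the difference from a blurred copy to crisp edges.
    soft = cv2.GaussianBlur(balanced, (0, 0), 2.0)
    sharpened = np.clip(balanced + 0.6 * (balanced - soft), 0, 255).astype(np.uint8)

    cv2.imwrite("processed.jpg", sharpened)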

Is This Still Called a Photograph?

In the old days, you took a photo by exposing light-sensitive film to a scene, and any fiddling with photos was a laborious effort in the darkroom. Digital images are far more mutable, and computational photography takes that manipulation to a new level.

Google brightens the exposure on human subjects and gives them smoother skin. HDR Plus and Deep Fusion blend multiple shots of the same scene. Stitched panoramas made of multiple images don't reflect a single moment in time.

So can you really call the results of computational photography a photo? Photojournalists and forensic investigators apply more rigorous standards, but most people will probably say yes, simply because it's mostly what your brain remembers from when you tapped that shutter button.

And it's smart to remember: the more computational photography is used, the more of a departure your shot will be from one fleeting instant of photons traveling into a camera lens. But computational photography is only getting more important, so expect even more processing in years to come.
