The future of photography is a camera made of code


Back in 2010, a team from Stanford University's computer graphics lab got their hands on a Nokia N900. It had a pretty good camera by smartphone standards at the time, but the researchers thought they could make it better with a little bit of code.

The Stanford team, led by professor Marc Levoy, was working on the leading edge of a nascent field known as computational photography. The theory was that software algorithms could do more than dutifully process photos, but actually make photos better in the process.

"The output of these techniques is an ordinary photograph, but one that could not have been taken by a traditional camera," is how the group described its efforts at the time.

Fast forward to today, and many of the techniques that Levoy and his collaborators worked on — yielding features like HDR and better photos in low light — are now commonplace. And in Cupertino, Calif., on Tuesday, Apple's iPhone event was another reminder of just how far smartphone technology has come.

What we think of as a camera is largely a collection of software algorithms that expands with each passing year.


The iPhone X has a front-facing camera system that senses depth, and can be used to unlock the device using facial recognition. But it is also used for photo processing when taking selfies. (Matthew Braga/CBC News)

Take Portrait Lighting, a feature new to the iPhone 8 Plus and iPhone X. Apple says it "brings dramatic studio lighting effects to iPhone." And it's all done in software, of course. Here's how an Apple press release describes it:

"It uses the dual cameras and the Apple-designed image signal processor to recognize the scene, create a depth map and separate the subject from the background. Machine learning is then used to create facial landmarks and add lighting over contours of the face, all happening in real time."

In other words, Apple is combining techniques used in augmented reality and facial recognition to create a photo that, to paraphrase the Stanford team, no traditional camera could take. On the iPhone X, the company is also using its facial recognition camera system, which can sense depth, to do similar tricks.
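The depth map is what makes that separation possible. Here is a minimal Python sketch of the general idea, assuming a per-pixel depth estimate is already available; the function name and the simple distance threshold are illustrative assumptions, not Apple's actual pipeline, which layers machine learning on top of the depth data.

import numpy as np

def separate_subject(image, depth, threshold=1.5):
    """Split an image into subject and background layers using a depth map.

    `image` is an HxWx3 array and `depth` is an HxW array of per-pixel
    distance estimates (for example, in metres). A single hard threshold
    is only an illustration; real systems refine the mask with learned
    segmentation.
    """
    mask = depth < threshold                           # True where a pixel is near the camera
    subject = np.where(mask[..., None], image, 0)      # keep only the nearby subject
    background = np.where(mask[..., None], 0, image)   # keep only the distant scenery
    return subject, background, mask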

While the underlying approaches behind many of these features aren't necessarily new, faster and more capable processors have made it feasible to do them on a phone. (Apple says its new phones even have a dedicated chip for machine learning tasks.)


The computational photography features found in the iPhone 8 Plus and iPhone X were described in the lobby outside the Steve Jobs Theatre following Tuesday's announcement. (Matthew Braga/CBC News)

With the iPhone 7 Plus, Apple introduced a feature called Portrait Mode, on which Portrait Lighting is built. It uses machine learning to blur the background of an image, creating the illusion of a portrait lens' shallow depth of field — an effect called bokeh. Samsung added a similar feature called Live Focus on its recently announced Note 8.
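The core trick behind that synthetic bokeh can be sketched in a few lines, again assuming a subject mask (from a depth threshold or a segmentation model) already exists. This is a rough illustration of the technique, not Apple's or Samsung's actual implementation:

import numpy as np
from scipy.ndimage import gaussian_filter

def fake_bokeh(image, mask, sigma=8.0):
    """Approximate a shallow depth of field by blurring only the background.

    `image` is an HxWx3 float array in [0, 1]; `mask` is a boolean HxW array
    that is True for subject pixels. Sketch of the general idea only.
    """
    # Blur each colour channel; this stands in for a lens's out-of-focus blur.
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma) for c in range(image.shape[-1])],
        axis=-1,
    )
    alpha = mask[..., None].astype(image.dtype)
    # Composite: sharp pixels where the subject is, blurred pixels elsewhere.
    return image * alpha + blurred * (1.0 - alpha)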

And it probably won't come as a surprise that Levoy, the Stanford professor, joined Google in 2011, not long after his team published a paper detailing their Nokia N900 work. He's still doing computational photography research, and recent work on improving the quality of HDR images made its way into Google's most recent Pixel phone.

It used to be that those post-processing tricks put the emphasis on post. You'd take your photo and then have to bring it into an app on your phone or laptop to get a similar kind of effect, or wait while the smartphone's camera did the processing itself. But with each new crop of smartphones, the algorithms get faster, more capable, and fade further into the background, turning code into its own kind of lens.
