Focus images with Adobe photography


 



Focus images instantly with Adobe's computational photography



Dave Story demonstrates the only prototype of
Adobe's 3D camera lens, part of the company's
newest computational photography technique.

Adobe has recently unveiled some novel photo-editing capabilities built on a
technology it calls computational photography. Combining a special lens with
computer software, the technique can split a camera image into multiple views
and reassemble them on a computer.
The method uses a lens embedded with 19 smaller lenses and prisms, like an
insect's compound eye, to capture a scene from different angles at the same
time. As Dave Story, Vice President of Digital Imaging Product Development
at Adobe, explained, this lens can determine the depth of every pixel in
the scene.
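The article doesn't say how the depth recovery works, but multi-view lens arrays classically recover depth by triangulation: a pixel that shifts more between neighbouring views (larger disparity) is closer to the camera. A minimal sketch of that relation — the parameter names (`focal_px`, `baseline`) are my own assumptions, not Adobe's:

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline):
    """Classic stereo triangulation: depth = focal length * baseline / disparity.

    disparity: per-pixel shift (in pixels) between two of the sub-lens views
    focal_px:  focal length expressed in pixels
    baseline:  distance between the two sub-lenses, in scene units
    """
    d = np.maximum(disparity, 1e-6)  # guard against divide-by-zero
    return focal_px * baseline / d
```

With 19 sub-views rather than two, a real system would combine many such pairwise estimates, but the principle is the same.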

This means that, after the photo is taken and transferred to a computer,
people can edit certain layers of the photo within seconds. If a user wants
to eliminate the background, the new software can simply erase everything
in the image that appears at or beyond a certain distance.
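Given such a per-pixel depth map, erasing "everything at or beyond a certain distance" amounts to a simple threshold mask. A minimal NumPy sketch of the idea, my own illustration rather than Adobe's code:

```python
import numpy as np

def remove_background(image, depth, cutoff):
    """Make every pixel at or beyond `cutoff` transparent.

    image: H x W x 3 uint8 RGB array
    depth: H x W per-pixel distance map (the kind the lens is said to recover)
    Returns an H x W x 4 RGBA array with the background zeroed out.
    """
    keep = depth < cutoff                 # True where the pixel stays
    alpha = keep.astype(np.uint8) * 255   # fully transparent beyond the cutoff
    return np.dstack([image * keep[..., None], alpha])
```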

Further, people can use a 3D focus brush to "reach into the scene and
adjust the focus," Story explained during a news conference, in a video
posted by Audioblog.fr. At the conference, he used the focus brush to bring
a blurry statue in the foreground of an image into focus simply by dragging
the tool over that area of the image. He then switched to a de-focus brush
to take a second statue, located further back in the image, out of focus.
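One way to picture the focus brush is as a local blend between two renderings of the same capture, one focused on the brushed object and one not; the per-pixel depth is what lets the software produce those renderings after the fact. A toy blend, where the sharp and blurred renderings are assumed inputs rather than something this snippet computes:

```python
import numpy as np

def focus_brush(sharp, blurred, brush_mask):
    """Blend sharp and blurred renderings under a brush mask.

    sharp, blurred: H x W x 3 float arrays, the same scene rendered
                    in and out of focus for the brushed region
    brush_mask:     H x W floats in [0, 1]; 1 where the user painted
                    "in focus", 0 where the de-focus brush was used
    """
    m = brush_mask[..., None]          # broadcast the mask over color channels
    return m * sharp + (1.0 - m) * blurred
```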

"This is something you cannot do with a physical camera," he said. "There's
no way to take a picture with just this section in focus and everything
else out of focus. It's not physically possible to make a camera that does
that. But with a combination of that lens and your digital darkroom, you
have what we call computational photography. Computational photography is
the future of photography."

Knowing the 3D nature of every pixel also enables people to view photos
from different angles after they are taken, which Story demonstrated.
Months after a photo is snapped, people can "move the camera" as if
travelling through a scene in Google Earth. Story suggested that this
ability would be useful if background objects were accidentally aligned in
undesirable positions, such as a lamp post appearing to stick straight out
of a person's head. In that case, you could rotate the image slightly to
one side, in order to view the scene from a different angle.
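"Moving the camera" after the fact is a parallax re-projection: nearer pixels (smaller depth) shift more than distant ones when the viewpoint slides sideways. A deliberately naive forward warp to illustrate the idea; it leaves holes where no source pixel lands, which real systems would fill in:

```python
import numpy as np

def shift_viewpoint(image, depth, baseline):
    """Shift each pixel horizontally by parallax ~ baseline / depth.

    image:    H x W x 3 array; depth: H x W distances (> 0)
    baseline: how far the virtual camera moves sideways
    Holes (pixels nothing maps to) are left black in this toy version.
    """
    h, w = depth.shape
    out = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        disp = np.round(baseline / depth[y]).astype(int)
        new_x = np.clip(xs + disp, 0, w - 1)
        out[y, new_x] = image[y, xs]   # forward warp; overlaps overwrite
    return out
```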

"We can do things that people now have to do manually, much more easily,"
Story said. "But we can also use computational photography to allow you to
accomplish physically impossible results."



"this is the future"?

reviewing: "This is something you cannot do with a physical camera," he
said. "There's no way to take a picture with just this section in focus and
everything else out of focus. It's not physically possible to make a camera
that does that."


I beg to differ


However, if we were talking about producing images such that, on a receding
scale, point A and point F were in focus while the elements between were
not, then it may have some interesting applications. It might be hard to
view such images, though.



and this statement: "We can do things that people now have to do manually,
much more easily".

I'm sorry... I think if it gets to the point where it's 'easier' to spend
thousands of dollars for me to *focus* or shuffle my feet a few inches,
then they've won and I truly would have become the mindless zombie consumer
the corporate machines want me to be.


http://www.physorg.com/news111141405.html


k




