3D camera sensor research from Stanford

Researchers at Stanford University have developed an innovative camera sensor that captures depth information with each shot. The system, called multi-aperture, uses a 3-megapixel sensor divided into slightly overlapping 16×16-pixel squares called subarrays; image processing then analyses the pixel-location differences between subarrays to work out the relative distance between objects in the photo. At present the 3D information is stored as metadata in a normal JPEG.
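The depth recovery described here is, at heart, stereo triangulation: a feature that appears shifted between two overlapping subarrays is closer the bigger the shift. Here's a minimal Python sketch of that relation; the pinhole-stereo formula (depth = focal length × baseline / disparity) is standard, but the numbers and function names are illustrative assumptions, not Stanford's actual pipeline:

```python
# Illustrative sketch of recovering depth from the pixel shift
# (disparity) between two overlapping subarrays. This is the classic
# pinhole-stereo relation, not Stanford's actual multi-aperture code;
# all parameter values below are hypothetical.

def depth_from_disparity(focal_length_px, baseline_mm, disparity_px):
    """Classic stereo relation: depth = f * B / d.

    focal_length_px -- focal length expressed in pixels
    baseline_mm     -- separation between the two subarrays, in mm
    disparity_px    -- pixel shift of the same feature between them
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_mm / disparity_px

# A feature shifted 4 px between subarrays 1 mm apart, seen through
# optics with a 2000 px focal length, sits about 500 mm away.
print(depth_from_disparity(2000, 1.0, 4.0))  # 500.0
```

Note how larger disparities give smaller depths: nearby objects shift more between viewpoints than distant ones, which is exactly the cue the per-subarray image processing exploits.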

[Image: Stanford multiarray 3D photo]

The system could also reduce noise, which particularly plagues the small, generally lower-ISO-capable sensors found in cellphones, and streamline design and build. Each chip could potentially be smaller than existing models, thanks to fast-advancing chip-manufacturing technology:

"There is opportunity for most of the complexity of the lens design to sit at the semiconductor rather than at the objective lens. Although the local optics [on the sensor] may be challenging, it is possible that the optics can be better controlled with lithography and semiconductor processes than with the injection molding and grinding that is used in conventional camera lenses."

Keith Fife, Stanford University

At present the multi-aperture camera draws ten times the battery power of a conventional sensor, due to the extra processing needed to fathom out the depth data; the overlapping subarrays also reduce the overall megapixel count. I'm still excited to see this sort of technology hit cellphones, though; it's almost like micro-geotagging, adding position data not just to the image as a whole (i.e. where you took it) but, more specifically, to where individual things were in the shot.
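Since the article says the 3D information rides along as metadata in a normal JPEG, here's one hedged sketch of how a coarse per-region depth map might be serialised for such a sidecar. The field names and grid layout are invented for illustration; the real Stanford format isn't described in the article:

```python
import json

# Hypothetical sketch: packing a coarse grid of per-region depths (mm)
# into a JSON blob that could live in a JPEG metadata segment (e.g.
# XMP or an APPn marker). The schema here is an assumption for
# illustration, not Stanford's actual format.

def pack_depth_metadata(depth_mm_grid):
    """Serialise a 2-D grid of relative depths (mm) as compact JSON."""
    return json.dumps({
        "format": "relative-depth-grid",
        "rows": len(depth_mm_grid),
        "cols": len(depth_mm_grid[0]),
        "depths_mm": [d for row in depth_mm_grid for d in row],
    })

def unpack_depth_metadata(blob):
    """Recover the 2-D depth grid from the JSON blob."""
    meta = json.loads(blob)
    cols = meta["cols"]
    flat = meta["depths_mm"]
    return [flat[i:i + cols] for i in range(0, len(flat), cols)]
```

Keeping the depth map as plain metadata like this is what makes the scheme backwards-compatible: any ordinary viewer just shows the JPEG, while depth-aware software can read the extra segment, which is the "micro-geotagging" appeal above.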
