Tuesday, October 30, 2012

Theoretical camera sensor - with adaptive sampling

I propose an idea for a theoretical imaging sensor that features an adaptive, frameless sampling scheme. At the moment it probably isn't technologically feasible, but who knows, in a few years it could already be practical.

Main features
The primary idea of the Adaptive Sampling Sensor (or ASS?) is that each image sensor element should be sampled temporally independently of the other elements. Data from an element should be collected if and only if the collected charge rises above a determined noise floor; this means the element has gathered enough light to actually contain useful information. The second important feature is that an element does not output data if the change between two samples is smaller than a determined value. A prerequisite for implementing these features is that the time since the last data output must be constantly measured, because this allows the "average" value for that time span to be calculated. Without measuring the time for each element, more time and less light would give the same value as less time and more light.

This theoretical sensor would send data out only when: 1) enough photons have been gathered to rise above the noise floor; 2) there has been a significant change in the element's value. Each element "fires" independently of the other elements and is "adaptive" to incoming light levels. No light = no changed data = no output.
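The two fire conditions above can be sketched as a small simulation of a single element. This is only an illustration of the logic, not a hardware design: the noise floor, the change threshold, and the choice to reset the charge counter after a suppressed sample are all my assumptions, not part of the original proposal.

```python
NOISE_FLOOR = 100.0   # assumed: minimum accumulated charge for a meaningful sample
CHANGE_DELTA = 0.05   # assumed: minimum relative change needed to emit new data

class Element:
    """One sensor element that fires independently of its neighbours."""

    def __init__(self):
        self.charge = 0.0        # charge accumulated since the last reset
        self.last_time = 0.0     # time of the last data output (or reset)
        self.last_value = None   # last reported intensity

    def integrate(self, photons, now):
        """Accumulate incoming light; return an event tuple or None."""
        self.charge += photons
        if self.charge < NOISE_FLOOR:
            return None                    # condition 1 not met: keep gathering
        elapsed = now - self.last_time
        value = self.charge / elapsed      # "average" intensity over the time span
        self.charge = 0.0
        self.last_time = now
        if (self.last_value is not None and
                abs(value - self.last_value) <= CHANGE_DELTA * self.last_value):
            return None                    # condition 2 not met: change too small
        self.last_value = value
        return (now, value)                # the element "fires"
```

Dividing the charge by the elapsed time is what makes the per-element clock necessary: without it, a dim scene sampled slowly would be indistinguishable from a bright scene sampled quickly.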

An ASS-type sensor mimics the way eyes work. Each cell's signal sampling is independent of the other cells, and a signal is generated only when there is change. Human eyes must constantly jitter in order to refresh the image, because without change the cells refuse to send signals to the brain. Reptiles, which lack this jitter feature, only see moving objects; to see their whole surroundings they must constantly move their eyes or themselves.

Main benefits
The main benefit of an ASS-type sensor is channeling data flow to where it is most needed. For the same bandwidth, more useful information can be gathered. As sampling is not temporally fixed, more samples can be gathered for moving objects, reducing stutter and/or motion blur. And as fixed temporal sampling goes out of the window, so do frames. Frames (or better, individual slices of time) can be reconstructed from the feed, but this is not necessary if a theoretical display supports rendering of temporally adaptive data.

This scheme resembles, in some ways, the way some video codecs work. The main difference is that codec algorithms can only be applied to already temporally sampled (frame-based) data, which means the data throughput bottleneck happens before encoding and the only benefit is reduced storage. Discarding data directly on the sensor means that the data feed is either reduced or contains relatively more useful information. The raw sensor feed is similar to constant-quality, variable-data-rate encoding, where no data means no storage space wasted.
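A back-of-envelope comparison illustrates the bandwidth argument. The numbers are entirely hypothetical: a tiny 100-element sensor watching a mostly static scene where only 5 elements see significant change per time slice.

```python
# Hypothetical scene: 100 elements, 60 time slices, 5 changing elements per slice.
PIXELS, CHANGING, SLICES = 100, 5, 60

# Frame-based: every element is sampled every slice, changed or not.
frame_based = PIXELS * SLICES

# Event-based: every element fires once for the initial scene,
# then only the changing elements produce data.
event_based = PIXELS + CHANGING * (SLICES - 1)

print(frame_based, event_based)
```

Under these assumptions the frame-based feed carries 6000 samples against 395 events, which is the "no data means no storage space wasted" logic applied at the sensor rather than in the encoder.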
