Tuesday, October 30, 2012
I propose an idea for a theoretical imaging sensor featuring an adaptive, frameless sampling scheme. At the moment it probably isn't technologically feasible, but who knows, in a few years it could become practical.
Main features
The primary idea of the Adaptive Sampling Sensor (or ASS?) is that each image sensor element is sampled temporally independently of the other elements. Data from an element is read out if and only if the collected charge rises above a determined noise floor, which means the element has gathered enough light to actually contain useful information. The second important feature is that an element does not output data if the change between two samples is smaller than a determined value. A prerequisite for implementing these features is that the time since the last data output must be constantly measured, because that makes it possible to calculate the "average" value over that time span. Without measuring the time for each element, more time and less light would give the same value as less time and more light.
This theoretical sensor would send data out only when: 1) enough photons have been gathered to rise above the noise floor; 2) there has been a significant change in the element value. Each element "fires" independently of the others and is "adaptive" to incoming light levels. No light = no changed data = no output.
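To make the two firing rules concrete, here is a minimal Python sketch of a single element. The parameter names and thresholds (NOISE_FLOOR, CHANGE_THRESHOLD) are invented for the illustration; this is a sketch of the idea, not a hardware design.

```python
# Hypothetical per-element parameters; real values would depend on the hardware.
NOISE_FLOOR = 100.0       # minimum accumulated charge worth reporting
CHANGE_THRESHOLD = 0.15   # relative change needed to trigger a new output

class SensorElement:
    """Simulates one independently sampling element of the proposed sensor."""

    def __init__(self):
        self.charge = 0.0            # charge accumulated since the last output
        self.last_output_time = 0.0  # when this element last fired
        self.last_value = None       # last reported average intensity

    def accumulate(self, photons, now):
        """Add incoming charge and decide whether to emit an event."""
        self.charge += photons

        # Rule 1: stay silent until the charge rises above the noise floor.
        if self.charge < NOISE_FLOOR:
            return None

        # Average intensity over the elapsed interval; without the timestamp,
        # "more time, less light" would look identical to "less time, more light".
        elapsed = now - self.last_output_time
        value = self.charge / elapsed if elapsed > 0 else self.charge

        # Rule 2: stay silent if the value hasn't changed significantly
        # (the element simply keeps integrating in this simplified model).
        if self.last_value is not None:
            if abs(value - self.last_value) < CHANGE_THRESHOLD * self.last_value:
                return None

        self.charge = 0.0
        self.last_output_time = now
        self.last_value = value
        return (now, value)  # the "event" this element sends downstream

# Example: a steady trickle of photons produces exactly one event,
# after which the unchanged value keeps the element silent.
element = SensorElement()
for t in range(1, 20):
    event = element.accumulate(photons=12.0, now=float(t))
    if event:
        print("event at", event[0], "value", round(event[1], 2))
```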
An ASS-type sensor mimics the way eyes work. Each cell samples independently of the others and generates a signal only when there is change. Human eyes must constantly jitter to refresh the image, because without change the cells stop sending signals to the brain. Reptiles, which lack this jitter, effectively see only moving objects; to see their whole surroundings they must constantly move their eyes or themselves.
Main benefits
The main benefit of an ASS-type sensor is that it channels data flow to where it is most needed: for the same bandwidth, more useful information can be gathered. As sampling is not temporally fixed, more samples can be gathered for moving objects, reducing stutter and/or motion blur. And as fixed temporal sampling goes out of the window, so do frames. Frames (or rather, individual slices of time) can be reconstructed from the feed, but that isn't necessary if a theoretical display supports rendering temporally adaptive data directly.
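As a rough illustration of how such a time slice could be reconstructed, here is a small Python sketch. The (timestamp, x, y, value) event format is an assumption made up for this example; a real feed would look different.

```python
def reconstruct_frame(events, t, width, height, background=0.0):
    """Rebuild a conventional frame at time t from a frameless event feed.

    `events` is assumed to be an iterable of (timestamp, x, y, value) tuples,
    one tuple per element firing. Each pixel simply takes the newest value its
    element reported at or before t.
    """
    latest = {}  # (x, y) -> (timestamp, value) of the newest usable event
    for ts, x, y, value in events:
        if ts > t:
            continue  # event lies after the requested slice
        prev = latest.get((x, y))
        if prev is None or ts > prev[0]:
            latest[(x, y)] = (ts, value)

    # Elements that never fired keep the background value (no light = no data).
    frame = [[background] * width for _ in range(height)]
    for (x, y), (_, value) in latest.items():
        frame[y][x] = value
    return frame
```

A display that understands temporally adaptive data would skip this step entirely and update each pixel as its events arrive.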
This scheme resembles the way some video codecs work. The main difference is that codec algorithms can only be applied to data that has already been temporally sampled (frame-based), so the data throughput bottleneck happens before encoding and the only benefit is reduced storage. Applying data discarding directly on the sensor means the data feed is either reduced or contains relatively more useful information. The raw sensor feed follows a constant-quality, variable-data-rate logic, where no data means no storage space wasted.
Wednesday, October 24, 2012
Time to go frameless?
For more than a hundred years, the idea of cinema has been built on sampling fixed moments of time. Technical advances have made it possible to increase the sampling frequency, but the idea is still the same. When will it be time to leave fixed temporal quantization behind and step up to what could be called cinema 2.0? An analogy with video codecs may describe the idea: constant size and variable quality vs. variable size and constant quality. At the moment cinema sits in the constant-size, variable-quality domain because of strict temporal sampling and a constant frame size, yet image quality degrades every time we move the camera or point it at something that moves. Motion blur steals information from us and there is no way to get it back.
Cinema 2.0
Second generation cinema would mean stepping away from fixed sampling and moving into the world of adaptive sampling and constant quality. Eliminating baked-in motion blur would be the first objective. Let our eyes do the work and decide when visuals move too fast to pick out details. The world doesn't render motion blur for our eyes; our brain does. And so should it be.
Thursday, October 18, 2012
Interesting times ahead
One thing ends, another begins. In the words of Dave Stewart: "Now show me frame two"
Trying to wrap my head around the question of whether there is any potential in starting a company that specializes in some specific supporting areas for VFX. There are companies that do this, but none that I am aware of in Scandinavia or the Baltic states; probably I just don't know them. It seems that most companies try to be an end-to-end solution for VFX or animation, which has its advantages but also shortcomings. Specializing in some specific, time-consuming piece of image manipulation takes time, opportunity and practice. And there is a very practical limit that comes from the number of people working in the company: the fewer the people, the more generalist they have to be if an end-to-end workflow is the goal. That in turn means lower efficiency and quality.
The two main problems seem to be a lack of specialists in the area and a possible lack of work due to not being an end-to-end facility. The first could be solved by slowly building a capable core team of skilled specialists through investing in learning and knowledge. The second is a bit more difficult, but with fast, cost-effective and high-quality work, existing post and VFX houses might see that outsourcing some self-contained parts of the work can raise the quality of their output and let them do what THEY do best.
Keeping a highly specialized artist on the team is not cost-effective for most small(er) VFX houses. It works for some fields (modellers, animators) but not so well for others. Hiring freelancers for specific projects can solve the problem, but freelancers need to be coordinated, workload distributed and so on. Not too bright either. Assigning team members to specialized work they are not comfortable with usually results in roughly three times more time (and budget) spent and mediocre results. Plus it keeps them away from doing the things they do better. In the end, this might even make the company avoid projects that require such special work, or try to come up with workarounds that aren't necessarily any cheaper or less time-consuming.
I see some light at the end of the tunnel, but it is still difficult to tell whether it is a choo-choo train or a bright future.
Monday, October 1, 2012
3Delight and Blender
That AtomKraft thing for AE didn't really cut it after all. Maybe I need some more time to figure out how everything works in the horrible mess that AE is for this kind of work.
But the 3Delight exporter for Blender, coded by Matt Ebb, is still great! Fiddled with it some more and compiled a new glass shader, because the example shaders for glass are... not very glass-looking. For some reason Blender now can't load shader names in existing scenes and just displays the text "loading shaders...". When I start Blender up and add a material in a new scene, everything is fine. Must investigate what is wrong.
Overall, 3Delight is a whole new world. Displacement is great and it should support Ptex as well. I should download a Mari trial or some Ptex sample files and try to get Ptex textures working. Shader writing is interesting; I tried to figure out what I must do to compile C++ DSO functions, and it seems it isn't very difficult with CMake. Render-time Python code evaluation is also something that can be very useful, for example for creating new geometry.
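As a rough illustration of the geometry-generation idea (not 3Delight's actual render-time API), a small script like the hypothetical one below could write a standard RIB archive with a grid of spheres, to be pulled into the scene with ReadArchive or adapted into a procedural. The file name and parameters are made up for this sketch.

```python
# rib_scatter.py -- hypothetical helper that writes a RIB archive containing
# a jittered grid of small spheres using standard RIB calls.
import random

def write_sphere_grid(path, count=10, spacing=1.0, radius=0.2, jitter=0.05):
    with open(path, "w") as rib:
        for i in range(count):
            for j in range(count):
                # Slightly randomize each position so the grid looks organic.
                x = i * spacing + random.uniform(-jitter, jitter)
                z = j * spacing + random.uniform(-jitter, jitter)
                rib.write("AttributeBegin\n")
                rib.write(f"  Translate {x:.4f} 0 {z:.4f}\n")
                rib.write(f"  Sphere {radius} {-radius} {radius} 360\n")
                rib.write("AttributeEnd\n")

if __name__ == "__main__":
    write_sphere_grid("scatter.rib")
```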
Ptex seems to have caught on very well, with PRMan, 3Delight, Arnold, V-Ray, Mari, 3D-Coat, Mudbox, 3ds Max, Maya, Modo and Nuke already supporting it, and the list grows longer every day. I hope Blender also adds Ptex support to the Cycles renderer and its painting tools.
Labels: 3Delight