Wednesday, September 24, 2014
Nuke 9 is pretty impressive!
I'm doing some projection-based cleanup work, and crunching through 4K EXR plates is pretty slow in Nuke 8. I opened the same comp in the Nuke 9 beta and voilà, the whole thing just flies! I'm not sure whether it's the multi-threaded EXR reading or improvements to ScanlineRender, but the speedup is easily several-fold. Great work, Foundry team!
Labels:
compositing,
Nuke
Tuesday, September 16, 2014
Nuke - smart cloning with time offset
Just a little snippet I'm using right now. If you need to clone from a certain area of the plate but don't want to stabilize it, and instead use a tracker as the base for the clone transform, changing the time offset of the clone source shifts the area you are cloning from. To keep the same relative clone source area while changing the time offset, use this expression in the clone shape's or stroke's source transform:
Tracker1.tracks.3.track_x-Tracker1.tracks.3.track_x(frame+source_time_offset)
This allows easy changes to the time offset while still cloning from the same relative location.
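The same trick extends to the vertical axis; assuming the same tracker, the matching expression for the y component is:
Tracker1.tracks.3.track_y-Tracker1.tracks.3.track_y(frame+source_time_offset)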
Labels:
cloning,
expressions,
Nuke
Wednesday, July 23, 2014
Nuke get(channels=0xf) but request(channels=0x7) error
I'm doing some paint fix work and Nuke is getting pretty annoying with its get(channels=0xf) but request(channels=0x7) error. It throws it seemingly at random on different nodes, and while everything renders correctly in the Viewer, the Write nodes crash with an error.
At first I searched the forums and the Nuke release notes but found nothing very useful, except a discussion about motion vector channels on The Foundry's Nuke user forum. The first suggestion was to copy all nodes into a clean project, but this didn't help me. Another suggestion was to remove everything related to motion vectors, so I turned motion vectors off on all ScanlineRender nodes. That didn't solve the problem either, so I started testing different things. After some fumbling I removed all channels except RGBA from every pipe coming out of a renderer, and it seems this removed the error, at least for now!
So if you get these cryptic get(channels= ....) errors, try removing all channels that are not actually used, especially the motion vector channels; this might fix the problem.
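One quick way to do the stripping (a minimal sketch from the Script Editor, assuming you want to keep only rgba after each renderer; adjust the channel set to taste):

import nuke

# Append a Remove node that keeps only rgba on the selected node's output;
# motion vectors, depth and any custom AOVs are dropped from the stream.
src = nuke.selectedNode()
strip = nuke.nodes.Remove(operation='keep', channels='rgba')
strip.setInput(0, src)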
A little teaser also:
Monday, July 21, 2014
SanDisk Extreme PRO 240GB SSD
Bought a new SSD drive to act as a cache and in-production project files drive. First impressions are that performance is pretty solid, with both read and write speeds noticeably higher than 500 MB/s at all file sizes. Let's see how it performs as a cache disk for Nuke and AE.
Thursday, July 17, 2014
12 bit TIFF files in Nuke
Opened up a sequence one day and it looked very strange, with false colors and some of the width lost. It turned out that Nuke does not support 12-bit TIFF files:
At first I thought there was an issue with the color encoding and tried a few different things, but since the image was also narrower, it was clear that something else was going on. The metadata viewer reported 12 bits per channel, but Nuke's tiffReader thinks it is a 16-bit TIFF and reads across channel borders in the byte array. This is why a quarter of the width was lost. The images originate from the DVS Clipster DI software, so they are probably not that rare.
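The quarter is no coincidence: a hypothetical 4096-pixel row packed at 12 bits per sample occupies 4096 × 12 / 8 = 6144 bytes, and interpreted as 16-bit samples that yields only 6144 / 2 = 3072 pixels, exactly 3/4 of the width.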
I searched the internet for a solution and made a support query to The Foundry. It turns out that 12-bit TIFFs are not supported yet, but the good part is that they turned it into a feature request, so we may see it implemented in future releases.
My workaround was to convert the sequences in After Effects, which is somewhat ironic, given that I got into Nuke through the "jump to Nuke" discount program using my AE serial code.
Tuesday, July 8, 2014
Convert checkerboard lens grid to SynthEyes dot pattern
This quick tutorial is based on a question from the SynthEyes user forum topic Using traditional checkerboard grids, not 'calibrated' grid: how do you use checkerboard lens grids with the SynthEyes lens calibration scripts that expect a certain dot pattern? The solution is rather easy and can be replicated in almost any compositing software.
As a starting point we have a checkerboard lens grid shot with our camera and lens of interest. It looks something like this:
Trying to use SynthEyes lens grid scripts on this one (Shot > Create Lens Grid Trackers) produces a funky, but unusable result:
So what can we do, instead of manually positioning all the trackers for SynthEyes to crunch on? The solution is rather simple and is based on filtering techniques, specifically convolution matrices that "search" for areas with contrasting corners.
In Nuke, the overall node setup looks like this:
We'll get to the top and bottom nodes in a second; let's first concentrate on the four Matrix nodes and the Merge nodes that follow them. The purpose of the Matrix nodes is to introduce custom convolution matrices that have special negative lobes, and thus produce the highest values in pixels that sit at the tips of high-contrast corners. We need different matrix sets because the corners can be oriented in different ways, and with four matrices that mirror each other we cover all possible cases.
The Matrix nodes have the following values:
As you can see, they are essentially the same values mirrored horizontally and vertically.
EDIT: you get more accurate results with a bigger matrix that has an even number of rows and columns. A bigger matrix is also less sensitive to grid rotation. Use something like this:
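As an illustration (my example values, not necessarily the ones in the screenshots), an even-sized quadrant kernel with negative lobes looks like this; its horizontal and vertical mirrors cover the other corner orientations:

 1  1 -1 -1
 1  1 -1 -1
-1 -1  1  1
-1 -1  1  1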
The Merge nodes that follow the Matrix nodes are set to the max operation; they take the maximum pixel value across all four matrices and make up the final convolved pattern.
The Blur, Grade and Invert nodes that follow the merges are there to make the dots a bit bigger and make them stand out more. The Grade node scales and clips pixel values so that we get maximum contrast between the dots and the background:
Invert node inverts the resulting image so that we get black dots on white background:
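To make the mechanics concrete, here is a small numpy/scipy sketch of the same chain (my reconstruction for illustration, with made-up kernel values, on a synthetic checkerboard; the Nuke setup above remains the reference):

import numpy as np
from scipy import ndimage

# Synthetic checkerboard "lens grid": 512 x 512, 64-pixel squares.
size, sq = 512, 64
yy, xx = np.mgrid[0:size, 0:size]
checker = (((yy // sq) + (xx // sq)) % 2).astype(np.float64)

# Slight blur first; it removes detail the corner kernels don't need.
checker = ndimage.gaussian_filter(checker, sigma=1.0)

# One quadrant-style corner kernel with negative lobes; flipping it
# horizontally and vertically yields the other orientations.
k = np.array([[ 1,  1, -1, -1],
              [ 1,  1, -1, -1],
              [-1, -1,  1,  1],
              [-1, -1,  1,  1]], dtype=np.float64)
kernels = [k, np.flipud(k), np.fliplr(k), np.flipud(np.fliplr(k))]

# Convolve with each kernel and keep the per-pixel maximum
# (the equivalent of the Matrix nodes plus the "max" Merges).
dots = np.maximum.reduce([ndimage.convolve(checker, kern) for kern in kernels])

# Grade and Invert: normalize/clip to full contrast, then flip to
# black dots on a white background, ready for SynthEyes.
dots = 1.0 - np.clip(dots / dots.max(), 0.0, 1.0)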
This image works fine in SynthEyes and we can calculate our lens model. Initial tracker layout:
Solved lens:
If the convolution does not give the expected result, try blurring the checker image slightly. Blurring removes unnecessary detail, and the convolution still accurately finds the checker pattern corners in the blurred image. Also, if the image has low contrast, try grading it to increase the contrast and fix problems with uneven lighting.
With extreme distortion this method might not work, because it relies on strict convolution along horizontal and vertical lines. The more rotated the pattern gets (near the corners of the frame), the less bright the dots will be!
That's it; hope all of this makes sense and is useful to somebody. If you catch an error or have ideas or comments related to this technique, please write a comment below the post or in the forum topic linked at the top.