As a starting point we have a checkerboard lens grid shot with our camera and lens of interest. It looks something like this:
Trying to run the SynthEyes lens grid script on this shot (Shot > Create Lens Grid Trackers) produces a funky but unusable result:
So what can we do instead of manually positioning all the trackers for SynthEyes to crunch on? The solution is rather easy and is based on filtering techniques, specifically convolution matrices (kernels) that "search" for areas with high-contrast corners.
In Nuke, the overall node setup looks like this:
We'll get to the top and bottom nodes in a moment; let's first concentrate on the four Matrix nodes and the Merge nodes that follow them. The purpose of the Matrix nodes is to introduce custom convolution matrices with negative lobes, so that the highest resulting values land on the pixels at the tips of high-contrast corners. We need several matrices because the corners can be oriented in different ways; with four matrices that mirror each other, all possible orientations are covered.
The Matrix nodes have the following values:
As you can see, they are essentially the same values mirrored horizontally and vertically.
EDIT: you get more accurate results with a bigger matrix that has an even number of rows and columns. A bigger matrix is also less sensitive to grid rotation. Use something like this:
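For illustration, the mirroring can be sketched in NumPy. The kernel values below are hypothetical stand-ins for the ones in the screenshot, but they show the even size and the negative-lobe layout:

```python
import numpy as np

# Illustrative 4x4 kernel with an even number of rows and columns
# (hypothetical values; the real ones are in the screenshot above).
# The negative lobe along the top and left edges makes the response
# peak where the tip of a bright corner meets a dark region.
base = np.array([
    [-1, -1, -1, -1],
    [-1,  1,  1,  1],
    [-1,  1,  1,  1],
    [-1,  1,  1,  1],
], dtype=float)

# The other three Matrix nodes use the same values mirrored
# horizontally, vertically, and both ways, covering all four
# possible corner orientations.
kernels = [
    base,
    np.fliplr(base),             # mirrored horizontally
    np.flipud(base),             # mirrored vertically
    np.flipud(np.fliplr(base)),  # mirrored both ways
]
```

Because the set of four mirrors is closed under 180-degree rotation, it does not matter whether the filter is applied as correlation or as true convolution once the responses are max-merged.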
The Merge nodes that follow the Matrix nodes are set to the max operation: they take the maximum pixel value across all four matrices, producing the final convolved pattern.
The Blur, Grade and Invert nodes that follow the merges make the dots a bit bigger and help them stand out. The Grade node scales and clips pixel values so that we get maximum contrast between the dots and the background:
The Invert node inverts the resulting image so that we get black dots on a white background:
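To make the whole chain concrete, here is a minimal NumPy/SciPy sketch of the merge, grade and invert steps, assuming a hypothetical set of mirrored corner kernels. The kernel values, the `detect_dots` helper, and the grade points are all illustrative guesses, not what the Nuke nodes literally compute:

```python
import numpy as np
from scipy.ndimage import correlate

# Hypothetical corner kernel and its three mirrors (illustrative
# values; the real ones are in the Matrix node screenshots).
base = np.array([
    [-1.0, -1.0, -1.0, -1.0],
    [-1.0,  1.0,  1.0,  1.0],
    [-1.0,  1.0,  1.0,  1.0],
    [-1.0,  1.0,  1.0,  1.0],
])
kernels = [base, np.fliplr(base), np.flipud(base),
           np.flipud(np.fliplr(base))]

def detect_dots(image, black=5.0, white=9.0):
    """Max-merge the four kernel responses, grade, and invert.

    `black` and `white` mimic the Grade node's blackpoint and
    whitepoint; the values here are guesses, tuned per shot.
    """
    responses = [correlate(image, k, mode="constant") for k in kernels]
    merged = np.maximum.reduce(responses)  # Merge nodes set to "max"
    # Grade: scale and clip so dots reach full contrast
    graded = np.clip((merged - black) / (white - black), 0.0, 1.0)
    return 1.0 - graded                    # Invert: black dots on white

# Synthetic example: one white checker square on a black background.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
out = detect_dots(img)  # dark dots appear at the square's corners
```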
This image works fine in SynthEyes and we can compute our lens model. Initial tracker layout:
If the convolution does not give the expected result, try blurring the checker image slightly. Blurring removes unnecessary detail, and the convolution will still accurately find the checker corners in the blurred image. Also, if the image has low contrast, try grading it to increase contrast and to counter uneven lighting.
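A minimal SciPy sketch of that preprocessing, assuming a grayscale image in the 0-1 range (the `preprocess` helper and the sigma value are illustrative guesses):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(image, sigma=1.5):
    """Blur slightly and stretch contrast before the corner convolution.

    sigma is a guess; tune it so fine detail disappears while the
    checker corners stay sharp enough to localize.
    """
    blurred = gaussian_filter(image, sigma)
    # Simple global contrast stretch: map the darkest pixel to 0 and
    # the brightest to 1. This tames mildly lifted blacks; strongly
    # uneven lighting would need a local normalization instead.
    lo, hi = blurred.min(), blurred.max()
    return (blurred - lo) / (hi - lo) if hi > lo else blurred

# Example: a low-contrast square with lifted blacks.
img = np.zeros((20, 20))
img[5:15, 5:15] = 0.8
img += 0.1
clean = preprocess(img)
```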
With extreme distortion this method might not work, because the convolution assumes roughly horizontal and vertical lines. The more rotated the pattern is (typically near the frame corners), the less bright the dots will be!
That's it; hope this all makes sense and is useful for somebody. If you catch an error or have ideas or comments related to this technique, please write a comment below the post or in the forum topic linked at the top.