Notes:
The Wikipedia article defines the bilinear blended Coons patch bounded by the
four curves \(c_0(s)\), \(c_1(s)\), \(d_0(t)\), and \(d_1(t)\) as

$$S(s,t) = (1-t)\,c_0(s) + t\,c_1(s) + (1-s)\,d_0(t) + s\,d_1(t) - \bigl[(1-s)(1-t)\,P_{00} + s(1-t)\,P_{10} + (1-s)t\,P_{01} + st\,P_{11}\bigr]$$

where the corners are \(P_{00} = c_0(0) = d_0(0)\), \(P_{10} = c_0(1) = d_1(0)\), \(P_{01} = c_1(0) = d_0(1)\), and \(P_{11} = c_1(1) = d_1(1)\).
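As a sanity check, the formula above can be evaluated directly. A minimal sketch in Python (the function name and the callable-per-curve representation are my own choices):

```python
def coons_point(c0, c1, d0, d1, s, t):
    """Evaluate the bilinear blended Coons patch at (s, t).  The four
    arguments are callables returning (x, y) tuples; the corners must
    agree: c0(0) == d0(0), c0(1) == d1(0), c1(0) == d0(1), c1(1) == d1(1)."""
    p00, p10 = c0(0.0), c0(1.0)   # corners shared with d0(0) and d1(0)
    p01, p11 = c1(0.0), c1(1.0)   # corners shared with d0(1) and d1(1)
    return tuple(
        (1 - t) * a + t * b + (1 - s) * c + s * d
        - ((1 - s) * (1 - t) * q00 + s * (1 - t) * q10
           + (1 - s) * t * q01 + s * t * q11)
        for a, b, c, d, q00, q10, q01, q11
        in zip(c0(s), c1(s), d0(t), d1(t), p00, p10, p01, p11)
    )
```

With four straight edges forming the unit square, the patch reduces to the identity mapping \(S(s,t) = (s,t)\), which makes a handy test case.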
For SVG mesh gradients the four curves are cubic Bézier curves.
The Inkscape wiki states that color blending inside the patch is bilinear. (I have not
found where this is specified in Advanced Gradients for SVG.) So presumably, given
colors \(C_{00}\), \(C_{10}\), \(C_{01}\), and \(C_{11}\)
for the four corners, we can compute a color within the patch using the formula

$$C(s,t) = (1-s)(1-t)\,C_{00} + s(1-t)\,C_{10} + (1-s)t\,C_{01} + st\,C_{11} \tag{5}$$
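A minimal sketch of formula (5) in Python (the function name and the tuple-of-floats color representation are my own choices):

```python
def blend_color(c00, c10, c01, c11, s, t):
    """Bilinearly blend four corner colors (RGB tuples) at patch
    coordinates (s, t), each in [0, 1] -- formula (5)."""
    return tuple(
        (1 - s) * (1 - t) * a + s * (1 - t) * b
        + (1 - s) * t * c + s * t * d
        for a, b, c, d in zip(c00, c10, c01, c11)
    )
```

At a corner the three other weights vanish, so the corner color is reproduced exactly; at the center each corner contributes a quarter.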
My goal is to develop automated software for creating a mesh gradient which best
matches a masked source image. There has already been research in this
area (Image Vectorization using Optimized Gradient Meshes, which works with Ferguson patches).
A reasonable goal is to minimize the error

$$E = \sum_{s,t} \bigl(I(x(s,t),\, y(s,t)) - C(s,t)\bigr)^2 \tag{6}$$

which might be an oversimplification since \(I\) is
sampling an RGB image. (What if we tried to optimize in HSV space?)
The Jian Sun et al. paper identifies this as a non-linear least
squares problem. The parameters include the color parameters
\(C_{00}\), \(C_{10}\), \(C_{01}\), and \(C_{11}\), and the space parameters are the control points for
\(c_0\), \(c_1\), \(d_0\), and \(d_1\) (which I
will label \(P_k\)).
They suggest a Levenberg-Marquardt algorithm, which depends upon computing a
Jacobian (with which I am not very familiar). Maybe it is a Jacobian instead of a
gradient because Levenberg-Marquardt works on the vector of residuals rather than
the summed error: with red/green/blue channels there are three residuals per
sample, and the Jacobian has one row for each.
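For the color half of the problem the Jacobian is actually easy: formula (5) is linear in the corner colors, so each Jacobian row is just the four bilinear weights, and for fixed geometry a single Gauss-Newton step solves the color fit exactly. A sketch for one channel, assuming the samples are already expressed in \((s, t)\) coordinates (all names are mine; true Levenberg-Marquardt would also add a damping term \(\lambda\) to the diagonal of the normal matrix):

```python
def fit_corner_colors(samples):
    """Fit the four corner colors of a single channel by least squares.
    `samples` is a list of (s, t, value) triples.  Because formula (5)
    is linear in the corner colors, the Jacobian is constant and one
    normal-equations solve is exact.  Returns (c00, c10, c01, c11)."""
    def jacobian_row(s, t):
        # Partial derivatives of C(s,t) w.r.t. the four corner colors.
        return [(1 - s) * (1 - t), s * (1 - t), (1 - s) * t, s * t]

    # Accumulate the normal equations  (J^T J) x = J^T v  directly.
    A = [[0.0] * 4 for _ in range(4)]
    b = [0.0] * 4
    for s, t, v in samples:
        r = jacobian_row(s, t)
        for i in range(4):
            b[i] += r[i] * v
            for j in range(4):
                A[i][j] += r[i] * r[j]

    # Tiny Gaussian elimination with partial pivoting (4x4 system).
    for col in range(4):
        piv = max(range(col, 4), key=lambda k: abs(A[k][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for k in range(col + 1, 4):
            f = A[k][col] / A[col][col]
            for j in range(col, 4):
                A[k][j] -= f * A[col][j]
            b[k] -= f * b[col]
    x = [0.0] * 4
    for i in range(3, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, 4))) / A[i][i]
    return tuple(x)
```

The geometric parameters are where the problem becomes genuinely non-linear and the iterative machinery earns its keep.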
In any case, I am going to have to figure out how to compute formulas for things
like \(\partial E/\partial C_{00}^{\nu}\) and evaluate them over the entire image. (I use
\(\nu\) to represent a
chroma element like \(r\), \(g\), or \(b\); and \(\xi\) stands in for a geometric
component like \(x\) or \(y\).)
Computing these partial differentials is relatively straightforward if we iterate
over \(s\) and \(t\).
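For instance, differentiating (6) with respect to the \(\nu\) channel of \(C_{00}\), whose bilinear weight in formula (5) is \((1-s)(1-t)\):

```latex
\frac{\partial E}{\partial C_{00}^{\nu}}
  = \sum_{s,t} 2\,\bigl(C^{\nu}(s,t) - I^{\nu}(x(s,t),\, y(s,t))\bigr)\,(1-s)(1-t)
```

The other three corners work the same way with their own bilinear weights.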
Now that we have an example of the partial differential with respect to one of the
chroma parameters, we can ask ourselves: do we really want to calculate the error as the
sum over \(s\) and \(t\)?
I feel it more appropriate to calculate it as the sum over \(x\) and \(y\). That then raises
the question: should our algorithm iterate over the relevant pixels of the image and calculate
the correct \(s\) and \(t\) to give
each \(x\) and \(y\)? Or should it
instead iterate over \(s\) and \(t\) and scale each
element of the sum by \(\partial x/\partial s\), \(\partial x/\partial t\),
\(\partial y/\partial s\), and \(\partial y/\partial t\) as appropriate? The
nature of the formula for \(S(s,t)\)
especially concerns me, since it is not easily inverted
and I'm not absolutely confident of the proper formulation of \((s, t)\) for a
specific \((x, y)\).
(Now that I have reacquainted myself with integration by
substitution for multiple variables.) One option is to sum over \(s\) and \(t\)
and use the determinant of the Jacobian to scale the result to \(x\) & \(y\)
space. This gives us
a new formula for \(E\):

$$E = \sum_{s,t} \bigl(I(x(s,t),\, y(s,t)) - C(s,t)\bigr)^2 \,\bigl|\det J\bigr|$$

Let's find out what that Jacobian determinant will look like:

$$\det J = \frac{\partial x}{\partial s}\frac{\partial y}{\partial t} - \frac{\partial x}{\partial t}\frac{\partial y}{\partial s}$$

where, from the Coons formula,

$$\frac{\partial x}{\partial s} = (1-t)\,c_{0x}'(s) + t\,c_{1x}'(s) + d_{1x}(t) - d_{0x}(t) + (1-t)\bigl(P_{00x} - P_{10x}\bigr) + t\bigl(P_{01x} - P_{11x}\bigr)$$
And after all that I need a vacation.
\(\partial x/\partial t\) will be
just as tortuous and have a similar structure, although the structure of the formulas that
fall out of \(c_0, c_1\) and \(d_0, d_1\)
will be swapped: the \(d\) curves are the ones differentiated, and the \(c\) curves
appear directly. The \(y\) partials are identical with \(y\) components in place of \(x\).
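Formulas like these are easy to get wrong, so I would rather have code I can verify against finite differences. A sketch assuming each boundary curve is a cubic Bézier given as four control-point values for a single coordinate (all function names are mine):

```python
def bezier(p, u):
    """Evaluate a cubic Bezier (4 control values, one coordinate) at u."""
    v = 1 - u
    return v**3 * p[0] + 3 * v**2 * u * p[1] + 3 * v * u**2 * p[2] + u**3 * p[3]

def bezier_d(p, u):
    """Derivative of the cubic Bezier at u."""
    v = 1 - u
    return 3 * (v**2 * (p[1] - p[0]) + 2 * v * u * (p[2] - p[1]) + u**2 * (p[3] - p[2]))

def coons_ds(c0, c1, d0, d1, s, t):
    """d/ds of one Coons coordinate; c0/c1 run along s, d0/d1 along t.
    The corner terms use P00 = c0[0], P10 = c0[3], P01 = c1[0], P11 = c1[3]."""
    return ((1 - t) * bezier_d(c0, s) + t * bezier_d(c1, s)
            + bezier(d1, t) - bezier(d0, t)
            + (1 - t) * (c0[0] - c0[3]) + t * (c1[0] - c1[3]))

def coons_dt(c0, c1, d0, d1, s, t):
    """d/dt of one Coons coordinate -- same structure with c and d swapped."""
    return (bezier(c1, s) - bezier(c0, s)
            + (1 - s) * bezier_d(d0, t) + s * bezier_d(d1, t)
            + (1 - s) * (c0[0] - c1[0]) + s * (c0[3] - c1[3]))

def det_jacobian(cx, cy, s, t):
    """Jacobian determinant at (s, t).  cx and cy are (c0, c1, d0, d1)
    tuples of control-point lists for the x and y coordinates."""
    return (coons_ds(*cx, s, t) * coons_dt(*cy, s, t)
            - coons_dt(*cx, s, t) * coons_ds(*cy, s, t))
```

On an identity patch (straight edges mapping the unit square to itself, with the control points at thirds so each Bézier reduces to a line) the determinant is 1 everywhere, which makes a convenient first test.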
The source image will be a discrete matrix of RGB triples. \(S(s,t)\) will often not
give integer \(x\) and \(y\)
values, so some form of interpolation will be necessary. In images with noise (any
photograph, or even a JPEG of a smooth computer painting) it is reasonable
to perform some smoothing, which might synergize with the interpolation
nicely.
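The simplest interpolation is bilinear within the pixel cell. A sketch for a single channel, assuming the image is a list of rows of floats (the function name and the clamping choice at the far edges are mine):

```python
def sample_bilinear(img, x, y):
    """Bilinearly interpolate `img` (list of rows of floats) at the
    fractional coordinates (x, y): x across columns, y down rows."""
    i = min(int(y), len(img) - 2)     # top row of the enclosing cell
    j = min(int(x), len(img[0]) - 2)  # left column of the enclosing cell
    fy, fx = y - i, x - j             # fractional position within the cell
    return ((1 - fy) * ((1 - fx) * img[i][j] + fx * img[i][j + 1])
            + fy * ((1 - fx) * img[i + 1][j] + fx * img[i + 1][j + 1]))
```

For RGB this would simply be applied per channel.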
Computing the geometric components \(\partial E/\partial \xi\)
is somewhat more complicated than the chroma components.
That leaves \(\partial I/\partial x\) and \(\partial I/\partial y\).
One tactic for computing these would be to use the derivative of the
bilinear interpolation of the samples on the boundary of the cell that
\((x, y)\) falls
within, but that definition of the derivative is discontinuous at cell boundaries (where
\(x\) or \(y\) is an integer).
Upgrading to bicubic interpolation might be an option when I stop being lazy and figure out the
related math. Then again, let's be super-lazy and apply a Gaussian blur to the
image!
Let us call the original image \(I_0\), over integer \(i\) and \(j\)
coordinates. Then

$$I(x, y) = \frac{1}{2\pi\sigma^2} \sum_{i} \sum_{j} I_0(i, j)\, e^{-\frac{(x-i)^2 + (y-j)^2}{2\sigma^2}} \tag{105}$$
In practice the algorithm does not need to iterate over all of \(I_0\) for
every \((x, y)\). A radius of
a couple of \(\sigma\)
is enough to capture the dominant elements (because outside that radius the exponential
term scales the values to inconsequentiality). Although I am torn on whether it is a good idea to
replace the \(2\pi\sigma^2\) denominator with

$$\sum_{i} \sum_{j} e^{-\frac{(x-i)^2 + (y-j)^2}{2\sigma^2}} \tag{112}$$
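A sketch of the truncated sum supporting both normalizations, again assuming a single-channel list-of-rows image (names and the 3-sigma default are my own choices):

```python
import math

def gaussian_sample(img, x, y, sigma, radius_sigmas=3.0, renormalize=True):
    """Sample `img` (list of rows of floats) at (x, y) through a Gaussian
    of width sigma, truncated to a window of a few sigma -- formula (105).
    With renormalize=True the 2*pi*sigma^2 denominator is replaced by the
    sum of the weights, formula (112), which also behaves better where the
    window is clipped by the image border."""
    r = int(math.ceil(radius_sigmas * sigma))
    cx, cy = int(round(x)), int(round(y))
    total, weight = 0.0, 0.0
    for i in range(max(cy - r, 0), min(cy + r + 1, len(img))):
        for j in range(max(cx - r, 0), min(cx + r + 1, len(img[0]))):
            w = math.exp(-((x - j) ** 2 + (y - i) ** 2) / (2 * sigma ** 2))
            total += w * img[i][j]
            weight += w
    return total / weight if renormalize else total / (2 * math.pi * sigma ** 2)
```

One point in favor of (112): on a constant image the renormalized version returns the constant exactly, whereas the \(2\pi\sigma^2\) version only approximates it (and degrades near the border, where part of the window is clipped).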