Wednesday, 10 December 2014

Colours - Simple scaling and combining

So far the images have all been retrieved as greyscale, the basic sensor return format.

To support colour capture the probes have image filters, which are placed in front of the sensor and can be used to select for colour, or for various chemical or non-visible wavelengths.

In the case of Voyager the two cameras each have a filter wheel, and on Cassini each camera has a pair of filter wheels, so every image is taken through a combination of two filters. For details on the particular Cassini filter combinations NASA has an FAQ on the topic.

For our purposes, to render colour or false colour images we use the filter information from the tag list, and combine multiple images of a single target to generate the final image. Usually some minor position shifts are needed to align the images, though.
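As a concrete example of using that tag information, a small helper along these lines could pick the output colour channel from the FILTER_NAME value. This is just a sketch - the function is mine, and it assumes the filter name has already been pulled out of the decoded tag list (BL1/BL2, GRN and RED are Cassini's blue, green and red filters):

int ChannelForFilter(const QString& filter)
{
   // OpenCV channel indices: 0 = blue, 1 = green, 2 = red
   if (filter.contains("BL1") || filter.contains("BL2"))
      return 0;
   if (filter.contains("GRN"))
      return 1;
   if (filter.contains("RED"))
      return 2;
   return -1; // not a plain visible-colour filter
}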

There are a few ways to try to stitch the images together. Initially we'll do this manually, since most of OpenCV's stock feature detection algorithms will cause us problems on “plain circles taken with different filters” - we won't get good feature matches from some of this data. Following this we can look at piecing together a colour image of a moon, using feature detection to automate the alignment process slightly.

The first, and simplest, thing we can do is to grab three images: one red, one green and one blue. Ideally these should have been taken in sequence and should all show the same subject in (roughly) the same position.

For this we'll take the three files:
  • "N1461106394_1.IMG"
  • "N1461106328_1.IMG"
  • "N1461106361_1.IMG"
These are blue, green and red images of Saturn respectively.

Since we will have to align the images we want to handle, as a minimum, X/Y offsets and scaling of the images relative to each other. Ideally we would fully undistort the image frames, but simple offsets and scaling will do for now.

To do this in OpenCV we use an affine transform.

Code-wise we provide two triangles: one from the image before the transformation and one from the image afterwards. We can then tell OpenCV to calculate the mapping required to take one set of points to the other, and then apply this mapping to the image.


The simplest example is to think of three corners of the image: if we know where those three corners end up, that fully defines the scaling and repositioning of the image.
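Under the hood an affine transform is just a 2x3 matrix M, with each output point given by (x', y') = (M00*x + M01*y + M02, M10*x + M11*y + M12). For the pure scale-and-shift case we're using here the matrix works out as:

   [ scale    0    offsetX ]
   [   0    scale  offsetY ]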

So we can do this in OpenCV with:
  • Create a new (empty) image
  • Get three of the corners of the old image
  • Get the three corners when the image has been scaled and moved
  • Calculate the transform
  • Apply it to the source image, writing the result into the destination image.
In this case I calculated the initial transform manually, aligning the rings of Saturn by eye, and folded the (minor) offset for this data into the red layer's entry.

So we create a new empty matrix with:
cv::Mat dst = cv::Mat::zeros(_im16.rows, _im16.cols, _im16.type());
And the three original corner points are simply:

cv::Point2f st[3];
st[0] = cv::Point2f(0, 0);
st[1] = cv::Point2f(_im16.cols - 1, 0);
st[2] = cv::Point2f(0, _im16.rows - 1);
Following the scale and shift, the new corner positions will be given by:

double scale;
double offsetX;
double offsetY;
cv::Point2f dt[3];
...
dt[0] = cv::Point2f(offsetX, offsetY);
dt[1] = cv::Point2f((_im16.cols * scale) + offsetX, offsetY);
dt[2] = cv::Point2f(offsetX, (_im16.rows * scale) + offsetY);

And then we can calculate the transform using cv::getAffineTransform(), and apply it with cv::warpAffine(), i.e.:

cv::Mat scalemat = cv::getAffineTransform(st, dt);
cv::warpAffine(_im16, dst, scalemat, dst.size());
And now dst contains our scaled and shifted output.

So for our three images we can define a simple structure:

#define IM_BLUE (0)
#define IM_GREEN (1)
#define IM_RED (2)
typedef struct
{
   QString nm;      // Source filename
   int color;       // Output channel index (BGR)
   double scale;    // Alignment scale factor
   double xshift;   // Alignment X offset (pixels)
   double yshift;   // Alignment Y offset (pixels)
} CFILES;
CFILES fileinfo[] = {
{"N1461106394_1.IMG", IM_BLUE, 1, 0, 0},
{"N1461106328_1.IMG", IM_GREEN, 1, 0, 0},
{"N1461106361_1.IMG", IM_RED, 1, -3, -1},
};
The BGR ordering is important: it's OpenCV's standard channel ordering for colour data.
At this point we can simply walk through and decode the three images in turn, placing an appropriately scaled and shifted image in an output Mat.
i.e., assuming we have:

RawImage* raw[3];
std::vector<cv::Mat> ch(3);
And the basic decode loop is:

for (int i = 0; i < 3; i++)
{
   int col = fileinfo[i].color;
   ch[col] = DecodeAndScale(fileinfo[i].nm);
}
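DecodeAndScale() itself isn't shown above - the sketch below gives one plausible shape for it, combining the IMG decode from the earlier posts with the affine alignment from this one. Note that DecodeImage() is a hypothetical stand-in for that earlier decoding code:

cv::Mat DecodeAndScale(const QString& nm)
{
   cv::Mat src = DecodeImage(nm); // hypothetical: IMG file -> 16-bit Mat

   // Look up the alignment values for this file
   double scale = 1, offsetX = 0, offsetY = 0;
   for (size_t i = 0; i < sizeof(fileinfo) / sizeof(fileinfo[0]); i++)
   {
      if (fileinfo[i].nm == nm)
      {
         scale = fileinfo[i].scale;
         offsetX = fileinfo[i].xshift;
         offsetY = fileinfo[i].yshift;
         break;
      }
   }

   // Three source corners and their scaled/shifted destinations,
   // exactly as in the single-image example above
   cv::Point2f st[3], dt[3];
   st[0] = cv::Point2f(0, 0);
   st[1] = cv::Point2f(src.cols - 1, 0);
   st[2] = cv::Point2f(0, src.rows - 1);
   dt[0] = cv::Point2f(offsetX, offsetY);
   dt[1] = cv::Point2f((src.cols * scale) + offsetX, offsetY);
   dt[2] = cv::Point2f(offsetX, (src.rows * scale) + offsetY);

   cv::Mat dst = cv::Mat::zeros(src.rows, src.cols, src.type());
   cv::Mat scalemat = cv::getAffineTransform(st, dt);
   cv::warpAffine(src, dst, scalemat, dst.size());
   return dst;
}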
We can now combine the three colour channels held in the ch vector using the OpenCV merge() function (the three Mats must share the same size and type):


cv::Mat out;
cv::merge(ch, out);
And now the matrix "out" contains our output image. Adding some simple scaling of the colours to use the entire dynamic range (as per the original greyscale scaling code) gets us:


This isn't perfectly aligned, but it's a proof of concept that our simple channel merge for colours is at least sane. We'd need to handle undistortion and colour correction of the data to get the colours "right", but it'll do for now.
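As an aside, that dynamic range stretch can also be done directly with OpenCV's normalize() rather than a hand-rolled loop. A minimal sketch, assuming the merged 16-bit image from above (the output filename is just an example):

cv::Mat stretched;
cv::normalize(out, stretched, 0, 65535, cv::NORM_MINMAX);
cv::imwrite("saturn_rgb.png", stretched); // 16-bit PNG output for inspection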