Sunday, 30 November 2014

Using OpenCV to process the images

We've hit the limits of our simple QImage-based approach, so the next step is to use OpenCV, which will allow us to process and store the images without data loss.

Adding OpenCV to the project

(At least on Linux) this is simply a modification to the .pro file, using pkg-config to provide qmake with the necessary linking information:
CONFIG += link_pkgconfig
PKGCONFIG += opencv
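For reference, a minimal .pro file using these lines might look something like the sketch below (the TARGET and SOURCES entries are just placeholders for the existing project's own values):
TEMPLATE = app
TARGET = rawviewer
QT += core gui

CONFIG += link_pkgconfig
PKGCONFIG += opencv

SOURCES += main.cpp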

Getting the data into OpenCV

The basic structure for storing image data in OpenCV is cv::Mat, so we declare
  cv::Mat _im16;
and create our image store with
  _im16 = cv::Mat(_height, _width, CV_16UC1);
The CV_16UC1 type tells OpenCV the data in the matrix is "16 bit, unsigned, single channel". At this point we can transfer data from our list of ints, setting a single pixel in the output with:
  _im16.at<uint16_t>(Point(x,y)) = raw->GetPixel(x,y);
where raw->GetPixel(x,y) accesses data from the integer array in the existing code. (Note that the Point() form of at<>() takes (x, y) co-ordinates, whereas the two-index form at<uint16_t>(row, col) takes them in (y, x) order.)
So putting this together, and adding a simple 8 bit handler (which populates a CV_8UC1 cv::Mat and then converts it to CV_16UC1), we have a fragment like:
  if (_samplesz == 1)
  {
    cv::Mat im8;
    im8 = cv::Mat(_height, _width, CV_8UC1);
    for (x=0; x < _width; x++)
      for (y=0; y < _height; y++)
        im8.at<uint8_t>(Point(x,y)) = raw->GetPixel(x,y);

    im8.convertTo(_im16, CV_16UC1);
  }
  else if (_samplesz == 2)
  {
    _im16 = cv::Mat(_height, _width, CV_16UC1);
    for (x=0; x < _width; x++)
      for (y=0; y < _height; y++)
        _im16.at<uint16_t>(Point(x,y)) = raw->GetPixel(x,y);
  }
  else
  {
    // error case...
  }
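As an aside, per-pixel at<>() access is convenient but not especially fast; if the copy ever becomes a bottleneck, one option (a sketch, assuming the same raw->GetPixel() accessor and the 16 bit case) is to walk each row through Mat::ptr<>():
  // Fill the 16 bit matrix a row at a time via a raw row pointer,
  // avoiding the per-call overhead of at<>() in the inner loop.
  for (y = 0; y < _height; y++)
  {
    uint16_t* row = _im16.ptr<uint16_t>(y);
    for (x = 0; x < _width; x++)
      row[x] = raw->GetPixel(x, y);
  }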
At this point we scale the values in the matrix to use the full 16 bit range: the OpenCV function minMaxLoc() gives us the minimum and maximum values in the array, and convertTo() applies the scaling, i.e.
  double minVal;
  double maxVal;
  double scale;

  minMaxLoc(_im16, &minVal, &maxVal, NULL, NULL);
  scale = 65535.0/(maxVal - minVal);
  _im16 -= minVal;
  _im16.convertTo(_im16, -1, scale);

(Note the scalar subtraction of minVal, which shifts the lowest pixel value down to 0 before scaling.)
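Incidentally, OpenCV can do this min/max stretch in a single call with cv::normalize(); assuming the same _im16 matrix, the equivalent would be:
  // Stretch the pixel values in place so the minimum maps to 0
  // and the maximum maps to 65535.
  normalize(_im16, _im16, 0, 65535, NORM_MINMAX);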
Following this we can write the OpenCV image out to disk. imwrite() needs a C-style string for the filename, and the QByteArray returned by toLocal8Bit() has to stay alive while we use its data (assigning it straight to a const char* would leave a dangling pointer), so we have
QString name;
...
  QByteArray nm;
  nm = name.toLocal8Bit();
  imwrite(nm.constData(), _im16);
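If images get written in more than one place it may be worth wrapping this up; a small helper along these lines (writeMat is just a placeholder name, and the usual OpenCV and QString includes are assumed) keeps the conversion in one spot:
// Write a cv::Mat to the file named by a QString, returning
// imwrite()'s success flag.
static bool writeMat(const QString &name, const cv::Mat &img)
{
  QByteArray fn = name.toLocal8Bit();
  return cv::imwrite(fn.constData(), img);
}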
And we can display it on the screen with the following boilerplate:
  namedWindow("Display image", WINDOW_AUTOSIZE);
  imshow("Display image", _im16);
  waitKey(0);
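One thing to be aware of: when imshow() is handed a 16 bit unsigned image it divides the pixel values by 256 to map them into the displayable 8 bit range, so the window is only an approximation of the data. For explicit control over the display we could convert down ourselves first, e.g.:
  // Convert the 16 bit image to 8 bit for display, scaling by 1/256
  // so the full 16 bit range maps onto 0..255.
  Mat disp;
  _im16.convertTo(disp, CV_8UC1, 1.0/256.0);
  imshow("Display image", disp);
  waitKey(0);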

More Transforms

Since we now have the image in OpenCV data structures we can apply some basic transforms to it, such as this crude despeckle, which removes some of the pixel glitches with a pair of morphological transforms (an open followed by a close). Although this is lossy it effectively removes the speckling; if we didn't want data loss we could instead apply the scaling derived from this image to the original (sketched at the end of this post):
Mat out;
Mat el;

QString nm;
QByteArray nms;

  el = getStructuringElement(MORPH_ELLIPSE, Size(3,3));

  out = _im16.clone();
// Despeckle with a morphological open/close pair
  morphologyEx(out, out, MORPH_OPEN, el);
  morphologyEx(out, out, MORPH_CLOSE, el);
// Rescale image pixels to the full 16 bit range
  minMaxLoc(out, &minVal, &maxVal, NULL, NULL);
  scale = 65535.0/(maxVal - minVal);
  out -= minVal;
  out.convertTo(out, -1, scale);
// Write the result out
  nm = "despeckle_" + QString::number(sz) + ".tiff";
  nms = nm.toLocal8Bit();
  imwrite(nms.constData(), out);
We use .clone() in the above code to avoid modifying the base image, and the result of the despeckle and rescale in the output file is:

This is an improvement on the darker image we had previously, regardless of the loss... Oddly this process highlights a bright line at the top of the image, which suggests we may have a parsing issue in the basic data to resolve. However that'll do for today...
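Incidentally, the lossless option mentioned above (measuring the range on the despeckled copy but applying the stretch to the untouched original) is only a small change; a sketch, reusing the variables from the fragment above, might be:
  // Measure the range on the despeckled copy, where the glitch
  // pixels no longer dominate the minimum/maximum...
  minMaxLoc(out, &minVal, &maxVal, NULL, NULL);
  scale = 65535.0/(maxVal - minVal);

  // ...but apply the shift and scale to a copy of the original data,
  // so the pixels themselves are untouched by the morphology.
  Mat scaled = _im16.clone();
  scaled -= minVal;
  scaled.convertTo(scaled, -1, scale);
Any glitch pixels brighter than the despeckled maximum simply saturate at 65535, but everything else is preserved.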