Integration of KAZE 1.6 in OpenCV
A new version of the KAZE and AKAZE features is a good candidate to become a part of OpenCV. So I decided to update the KAZE port I made a while ago to the new version of these features and finally submit a pull request to make them a part of OpenCV.
KAZE is now part of the OpenCV library
The OpenCV team has accepted my pull request and merged the KAZE port into the master branch of the OpenCV library. KAZE and AKAZE features will become available in OpenCV 3.0. Of course, you can grab the development branch and build it from scratch to access them now.
**Looking for source code? It's all there: [KAZE & AKAZE in OpenCV][kaze-branch].**
I’m going to keep the KAZE sources as intact as possible to simplify their further support.
The original KAZE and AKAZE implementations will be placed in their own folders (such as
akaze/) under the
features2d module, and what we want is to write facade wrappers for these algorithms.
To integrate the KAZE features we need to adapt the source code to the OpenCV coding guidelines, make it consistent with the header include system, integrate it into the build system, implement wrappers from KAZE to the Features2D API, and add unit tests. This work splits into four steps:
- Adapt KAZE and AKAZE sources (remove unused functions, fix includes and macros)
- Implement Features2D wrappers and expose properties for runtime configuration of KAZE.
- Add unit tests and remove duplicate functions that already exist in OpenCV.
- Replace OpenMP with cv::parallel_for_
Changes in KAZE sources
First, we need to adapt the existing sources.
Step 1 - Update OpenCV includes
Since we’re making KAZE a part of the library, we can no longer reference OpenCV types through the standard public includes.
Instead, one should use the module’s precomp.hpp header like this:
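The change is small; a sketch of what it looks like inside the module (the exact header list in the merged sources may differ):

```cpp
// Inside the features2d module, public OpenCV headers are not referenced
// directly. The module-local precompiled header pulls everything in:
#include "precomp.hpp"

// ...instead of standard public includes such as:
// #include <opencv2/features2d/features2d.hpp>
```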
Step 2 - Cleaning up the code
There is a C-style assert(cond) macro that I will replace with CV_Assert for convenience.
Dump of KAZE internal structures
The KAZE and AKAZE algorithms can ‘dump’ internal buffers to disk using the imwrite function. But imwrite lives in the highgui module, which may not be available to features2d, and I assume the goal of this feature was to simplify debugging of KAZE features.
Since we may expect the implementation is mature enough, we will remove these functions (
Save_Nonlinear_Scale_Space) from the sources.
Fixing the PI constant
The sources use the M_PI symbol to represent the number π. Since M_PI is not part of standard C++, we will use the
CV_PI replacement instead.
The helper file utils.cpp contains auxiliary functions that are not used by KAZE directly but rather serve precision estimation. We don’t need these functions in the OpenCV package. So we say goodbye to the following functions:
Step 3 - Fix a constant-expression bug in compute_derivative_kernels
This is a subtle bug: a stack array gets declared with the size ksize, which is not known at compile time (a variable-length array, which is not valid standard C++).
We can quickly fix this issue with std::vector:
Step 5 - Wrapping KAZE for OpenCV
OpenCV provides three base types for extending the features2d API: FeatureDetector, DescriptorExtractor, and Feature2D. Since KAZE provides both detector and descriptor-extractor functionality, we will derive
our class from
Feature2D.
First, we should implement helper functions that indicate the depth and size of the feature descriptor and the matcher type:
In order to make OpenCV happy, we need to implement three virtual functions from Feature2D: detectImpl, computeImpl, and
operator(), also known as detectAndCompute.
Step 6 - Detection of KAZE keypoints:
Please note that we convert the input image to a grayscale floating-point image normalized to [0, 1]. This is a requirement of the KAZE algorithm.
Step 7 - Extraction of KAZE descriptors:
Two asserts at the end of the function ensure that the descriptors KAZE returns are consistent with the declared descriptor size and type.
Step 8 - Detection and extraction in a single call:
Step 9 - Algorithm configuration
OpenCV provides an option to create and configure an algorithm at runtime by its name. This is done using the special CV_INIT_ALGORITHM macro, which registers all OpenCV algorithms during startup. Using this macro we register the new KAZE and AKAZE algorithms under the Feature2D module and expose additional properties that the user can change at runtime:
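A hedged sketch of what such a registration looks like (the property name `upright` is illustrative; the merged sources expose their own set of parameters):

```cpp
// Registers KAZE in OpenCV's runtime algorithm factory, so that
// Algorithm::create("Feature2D.KAZE") works and the listed parameters
// can be read and changed by name at run time.
CV_INIT_ALGORITHM(KAZE, "Feature2D.KAZE",
    obj.info()->addParam(obj, "upright", obj.upright))
```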
Please note that KAZE and AKAZE have many more properties. They (and documentation for them) will be added later.
Step 10 - Unit tests
There is a nice unit-testing system for feature detectors and descriptors in OpenCV. Using it is very simple, yet it performs many sanity checks and validates both parts of the Features2D API.
All we need to do is add new unit test suites:
In addition to simple checks that our implementation does some job, there are more sophisticated tests that verify the rotation and scale invariance of the computed features.
Step 11 - Enabling multithreading
Both the feature detection and the extraction stage can be made faster using multithreading. Fortunately, AKAZE is designed very clearly, and one can find OpenMP instructions in its critical sections:
But OpenCV uses its own abstraction layer for multithreading called
cv::parallel_for_. Personally I think it’s a very wise architectural design decision, since it hides the specific caveats of particular threading backends (OpenMP, TBB, Concurrency, GCD, etc.). You can read more about using cv::parallel_for_ in one of my previous posts or in the OpenCV documentation.
For instance, here is how to parallelize the building of the nonlinear scale space for AKAZE. The old OpenMP version of the
Compute_Multiscale_Derivatives function looked like this:
To port it to cv::parallel_for_, we introduce an ‘invoker’ function object that performs a discrete piece of the job on a small subset of the whole data. The threading API then does all the work of scheduling multithreaded execution among worker threads:
After the integration, I ran KAZE and AKAZE through a feature-descriptor evaluation framework to see how they perform. I was really impressed by the matching precision in the rotation and scaling tests. Look at these self-explanatory charts, where AKAZE beats all the other features!