Point matching using rich feature descriptors
Now, we will make use of our constraint equations to calculate the essential matrix. To get our constraints, remember that for each point in image A, we must find a corresponding point in image B. We can achieve such a matching using OpenCV's extensive 2D feature-matching framework, which has greatly matured in the past few years.
Feature extraction and descriptor matching is an essential process in Computer Vision, used in many methods for all sorts of operations, for example, detecting the position and orientation of an object in an image, or searching a large image database for images similar to a given query. In essence, feature extraction means selecting points in the image that make for good features and computing a descriptor for each of them. A descriptor is a vector of numbers that describes the neighborhood around a feature point in an image. Different methods produce descriptor vectors of different lengths and data types. Descriptor matching is the process of finding, for each feature in one set, its corresponding feature in another set using the descriptors. OpenCV provides very easy and powerful methods to support feature extraction and matching.
Let's examine a very simple feature extraction and matching scheme:
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
using namespace cv;
using namespace std;

// img1 and img2 are the two input images, loaded beforehand
vector<KeyPoint> keypts1, keypts2;
Mat desc1, desc2;

// detect keypoints and extract ORB descriptors
Ptr<Feature2D> orb = ORB::create(2000);
orb->detectAndCompute(img1, noArray(), keypts1, desc1);
orb->detectAndCompute(img2, noArray(), keypts2, desc2);

// match the descriptors using Hamming distance
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
vector<DMatch> matches;
matcher->match(desc1, desc2, matches);
You may have already seen similar OpenCV code, but let's review it quickly. Our goal is to obtain three elements: feature points for the two images, descriptors for them, and a matching between the two sets of features. OpenCV provides a range of feature detectors, descriptor extractors, and matchers. In this simple example, we use the ORB class to get both the 2D locations of Oriented FAST and Rotated BRIEF (ORB) feature points (where BRIEF stands for Binary Robust Independent Elementary Features) and their respective descriptors. ORB may be preferred over traditional 2D features such as Speeded-Up Robust Features (SURF) or the Scale-Invariant Feature Transform (SIFT) because it is unencumbered by intellectual property restrictions and has been shown to be faster to detect, compute, and match.
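Because all of these algorithms implement OpenCV's common Feature2D interface, swapping detectors only requires changing the creation line. As a minimal sketch (reusing the img1, img2, keypts, and desc variables from the snippet above), here is how AKAZE, another patent-free detector shipped with OpenCV, could stand in for ORB:

// AKAZE plugs into the same Feature2D interface used for ORB above
Ptr<Feature2D> akaze = AKAZE::create();
akaze->detectAndCompute(img1, noArray(), keypts1, desc1);
akaze->detectAndCompute(img2, noArray(), keypts2, desc2);
// AKAZE's default (MLDB) descriptors are also binary, so the same
// Hamming-distance matcher can be reused unchanged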
We use a brute-force binary matcher to get the matching; it simply compares every feature in the first set against every feature in the second set, hence the name brute force.
In the following image, we can see matched feature points across two images from the Fountain-P11 sequence, which can be found at http://cvlab.epfl.ch/~strecha/multiview/denseMVS.html:

In practice, raw matching such as the one we just performed is reliable only up to a point, and many of the matches are probably erroneous. For that reason, most SfM methods perform some form of filtering on the matches to ensure correctness and reduce errors. One form of filtering, which is built into OpenCV's brute-force matcher, is cross-check filtering: a match is considered true only if a feature of the first image matches a feature of the second image and the reverse check, matching features of the second image against the first, arrives at the same pairing. Another common filtering mechanism, used in the provided code, exploits the fact that the two images show the same scene and are related by a certain stereo-view geometry. In practice, the filter tries to robustly calculate the fundamental or essential matrix, which we will learn about in the Finding camera matrices section, and retain only those feature pairs that agree with this calculation up to a small error.
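The exact filtering code appears later in the chapter; as a rough sketch of the idea, assuming the matches, keypts1, and keypts2 variables from the earlier snippet (and the opencv2/calib3d.hpp header for findFundamentalMat), one could fit a fundamental matrix with RANSAC and keep only the inlier pairs:

// Cross-check filtering can instead be enabled directly in the matcher:
// Ptr<BFMatcher> matcher = BFMatcher::create(NORM_HAMMING, /*crossCheck=*/true);

// Align the matched keypoints into two corresponding point arrays
vector<Point2f> pts1, pts2;
for (const DMatch& m : matches) {
    pts1.push_back(keypts1[m.queryIdx].pt);
    pts2.push_back(keypts2[m.trainIdx].pt);
}

// Robustly fit the fundamental matrix; RANSAC marks the surviving matches
vector<uchar> inlierMask(pts1.size());
Mat F = findFundamentalMat(pts1, pts2, FM_RANSAC, 3.0, 0.99, inlierMask);

// Retain only the matches consistent with the epipolar geometry
vector<DMatch> goodMatches;
for (size_t i = 0; i < matches.size(); i++)
    if (inlierMask[i])
        goodMatches.push_back(matches[i]);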
An alternative to using rich features, such as ORB, is to use optical flow; the following information box provides a short overview. Optical flow can replace descriptor matching for finding the required point matches between two images, while the rest of the SfM pipeline remains the same. OpenCV recently extended its optical flow API for computing the flow field between two images, and it is now faster and more powerful.
Optical flow is the process of matching selected points from one image to another, assuming both images are part of a sequence and relatively close to one another. Most optical flow methods compare a small region, known as the search window or patch, around each point in image A to the same area in image B. Following a very common rule in Computer Vision, called the brightness constancy constraint (among other names), the small patches of the image are assumed not to change drastically from one image to the other, so the magnitude of their difference should be close to zero. In addition to matching patches, newer optical flow methods use a number of additional techniques to get better results. One is image pyramids, a series of progressively smaller resized versions of the image, which allow working from coarse to fine, a well-known trick in Computer Vision. Another is to define global constraints on the flow field, assuming that points close to each other move together in the same direction. A more in-depth review of optical flow methods in OpenCV can be found in the chapter Developing Fluid Wall Using the Microsoft Kinect, available on the Packt website.
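To make this alternative concrete, here is a minimal sketch of point matching with sparse, pyramidal Lucas-Kanade optical flow. goodFeaturesToTrack and calcOpticalFlowPyrLK are standard OpenCV calls (from opencv2/imgproc.hpp and opencv2/video.hpp), and gray1 and gray2 are assumed to be the two input images converted to grayscale:

// Pick strong corners to track in the first image
vector<Point2f> pts1, pts2;
goodFeaturesToTrack(gray1, pts1, 2000, 0.01, 10.0);

// Track the points into the second image with pyramidal Lucas-Kanade
vector<uchar> status;
vector<float> err;
calcOpticalFlowPyrLK(gray1, gray2, pts1, pts2, status, err);

// status[i] == 1 means pts1[i] was tracked successfully to pts2[i];
// the surviving pairs can stand in for descriptor matches in the
// rest of the SfM pipeline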