[Final Year Project] FPGA based Image Mosaicing using AI (log #6)
Each keypoint extracted by the SIFT algorithm is represented by a 128-dimensional vector known as the descriptor.
In the code:

sift = cv2.xfeatures2d.SIFT_create()
kp, desc = sift.detectAndCompute(img, None)
kp is the list of keypoints, and desc is the array holding one descriptor per keypoint.
The descriptors are used to compare keypoints in the two images.
Brute Force Matcher
We use the Brute-Force Matcher to find matches between similar keypoints in the two images. The matches are evaluated on the basis of the Euclidean distance between the descriptors of the two keypoints.
The Euclidean distance from every keypoint descriptor in the first image to every keypoint descriptor in the second image is calculated. The good matches are then separated out using a minimum-distance criterion.
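The all-pairs distance computation can be sketched in plain numpy. This is only an illustration of the logic a brute-force matcher performs internally; desc1 and desc2 stand in for the real SIFT descriptor arrays and are filled with random values here.

```python
import numpy as np

# Stand-ins for the (N, 128) SIFT descriptor arrays of the two images.
rng = np.random.default_rng(0)
desc1 = rng.random((5, 128)).astype(np.float32)
desc2 = rng.random((7, 128)).astype(np.float32)

# Euclidean distance from every descriptor in image 1 to every
# descriptor in image 2 -> a (5, 7) distance matrix.
dists = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)

# For each keypoint in image 1, the best match is the closest
# descriptor in image 2.
best = dists.argmin(axis=1)
best_dists = dists.min(axis=1)
print(best)        # index of the nearest descriptor in image 2
print(best_dists)  # the corresponding distances
```

The good matches are then the entries of best_dists that fall below the chosen distance threshold.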
In the following code, the previous feature-extraction program is used to provide the keypoints and descriptors for the matching process.

https://gist.github.com/dhairyagada/df840d431fdbb0b5cd279ef050651608

In the above code, 'minlimit' is used to identify the good matches among all the matches obtained by the BFMatcher. If we increase minlimit, more matches are obtained; if we decrease it, the number of matches decreases.
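The effect of the threshold can be seen in isolation with a small sketch. The distances below are made up for illustration; in the real program they come from the BFMatcher results, but the filtering rule is the same.

```python
import numpy as np

# Hypothetical Euclidean distances of five candidate matches.
match_distances = np.array([12.0, 40.0, 55.0, 90.0, 130.0])

def good_matches(distances, minlimit):
    # A match is "good" when its distance is below minlimit.
    return distances[distances < minlimit]

print(len(good_matches(match_distances, 60)))   # -> 3
print(len(good_matches(match_distances, 100)))  # raising minlimit keeps more -> 4
print(len(good_matches(match_distances, 30)))   # lowering it keeps fewer -> 1
```

A larger minlimit admits weaker matches along with strong ones, so the match count rises; a smaller minlimit keeps only the closest descriptor pairs.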