Extracting the common part of images using OpenCV

I'm writing a program that finds differences between images. For now I'm detecting features with AKAZE, so I have the points the two images share. The problem is that the two images only have a part in common. How can I extract that common part from both images? To explain better: I need to extract the common part from the first image and then from the second, so I can use absdiff to find the differences. I'm programming in C++.

Thanks to all!

You should warp the first image onto the second. You can use the findHomography and perspectiveTransform functions with the correspondences given by your matched keypoints. You can find most of the code you need here.
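Since the question already uses AKAZE, the keypoint correspondences can come straight from a descriptor match. What follows is a minimal sketch, not part of the original answer, of one way to fill ptsA and ptsB: the AKAZE + BFMatcher pipeline and the 0.8 ratio threshold are assumptions chosen for illustration, and matchAkaze is a hypothetical helper name.

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;

// Fill ptsA/ptsB with corresponding points from AKAZE descriptor matches.
void matchAkaze(const Mat1b& A, const Mat1b& B,
                vector<Point2f>& ptsA, vector<Point2f>& ptsB)
{
    Ptr<AKAZE> akaze = AKAZE::create();
    vector<KeyPoint> kpA, kpB;
    Mat descA, descB;
    akaze->detectAndCompute(A, noArray(), kpA, descA);
    akaze->detectAndCompute(B, noArray(), kpB, descB);

    // AKAZE's default descriptors are binary, so match with Hamming distance.
    BFMatcher matcher(NORM_HAMMING);
    vector<vector<DMatch>> knn;
    matcher.knnMatch(descA, descB, knn, 2);

    // Keep only distinctive matches (Lowe's ratio test, threshold 0.8).
    for (const vector<DMatch>& m : knn)
    {
        if (m.size() == 2 && m[0].distance < 0.8f * m[1].distance)
        {
            ptsA.push_back(kpA[m[0].queryIdx].pt);
            ptsB.push_back(kpB[m[0].trainIdx].pt);
        }
    }
}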

Update


Incidentally, I had to do basically the same thing today. It was tested on grayscale images (Mat1b), but only minor changes are needed to apply it to RGB images (Mat3b). Here are the relevant parts of the code:

#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

// Load both images as grayscale (the snippet was tested on Mat1b).
Mat1b A = imread("...", IMREAD_GRAYSCALE);
Mat1b B = imread("...", IMREAD_GRAYSCALE);

vector<Point2f> ptsA;
vector<Point2f> ptsB;

// Fill ptsA, ptsB with the points given by the match of your descriptors.

// Estimate the homography mapping A onto B; RANSAC makes it robust to outlier matches.
Mat H = findHomography(ptsA, ptsB, RANSAC);

// Warp A into B's coordinate frame.
Mat1b warpedA;
warpPerspective(A, warpedA, H, B.size());

// Now compute the per-pixel difference.
Mat1b res;
absdiff(warpedA, B, res);

// res is what you are looking for!
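One caveat the answer does not cover: after warpPerspective, the pixels of warpedA that fall outside the original image A are filled with black, so absdiff will report large differences over the non-overlapping area as well. Below is a small sketch, continuing from the snippet above and purely as an assumption on my part, of restricting the result to the common region by warping a white mask with the same homography.

// Warp an all-white image with the same H to find where A actually covers B.
Mat1b mask(A.size(), uchar(255));
Mat1b overlap;
warpPerspective(mask, overlap, H, B.size());

// Keep the diff only inside the common (overlapping) region.
Mat1b maskedRes = Mat1b::zeros(res.size());
res.copyTo(maskedRes, overlap);
// maskedRes holds the absdiff values only where both images overlap.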