camera calibration demo/tutorial: https://sites.google.com/site/hongslinks/opencv-1/camera-calibration-opencv-python
//Installation manual, updated 2020 Feb 1, and 2019 jan 18 for opencv3 and opencv4
http://www.cse.cuhk.edu.hk/~khwong/www2/OSVIP/vs2017-opencv-Installation.docx
//test files
---------------------------------------------- cpp file ---------------------------------------------------------------------------
################### test.cpp hough line #################################
//Test houghlines, based on an office PC, vs2010, opencv300, 32-bit
//Open a project in vs2013 (15 June 2017)
//
// 1) Win32 project (console application), say ConsoleApplication18
// 2) Copy content of houghlines.cpp (see below) to ConsoleApplication18.cpp
// 3) Make sure to add #include "stdafx.h" as the first line of ConsoleApplication18.cpp
// Source
// ConsoleApplication18.cpp : Defines the entry point for the console application.
//
// houghlines.cpp, the input file is c:\\images\\building.jpg, i.e.
//Mat src = imread("c:\\images\\building.jpg", 0);
#include "stdafx.h"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
using namespace cv; using namespace std;
static void help()
{
cout << "\nThis program demonstrates line finding with the Hough transform.\n"
"Usage:\n"
"./houghlines <image_name>, Default is ../data/pic1.png\n" << endl;
}
int main(int argc, char** argv)
{
cv::CommandLineParser parser(argc, argv, "{help h||}{@image|../data/pic1.png|}"
);
if (parser.has("help"))
{
help(); return 0;
}
string filename = parser.get<string>("@image");
if (filename.empty())
{
help();
cout << "no image_name provided" << endl;
return -1;
}
//Mat src = imread(filename, 0);
Mat src = imread("c:\\images\\building.jpg", 0);
if (src.empty())
{
help();
cout << "can not open " << filename << endl;
return -1;
}
Mat dst, cdst;
Canny(src, dst, 50, 200, 3);
cvtColor(dst, cdst, COLOR_GRAY2BGR);
#if 0
vector<Vec2f> lines;
HoughLines(dst, lines, 1, CV_PI / 180, 100, 0, 0);
for (size_t i = 0; i < lines.size(); i++)
{
float rho = lines[i][0], theta = lines[i][1];
Point pt1, pt2;
double a = cos(theta), b = sin(theta);
double x0 = a*rho, y0 = b*rho;
pt1.x = cvRound(x0 + 1000 * (-b)); pt1.y = cvRound(y0 + 1000 * (a));
pt2.x = cvRound(x0 - 1000 * (-b)); pt2.y = cvRound(y0 - 1000 * (a));
line(cdst, pt1, pt2, Scalar(0, 0, 255), 3, CV_AA);
}
#else
vector<Vec4i> lines;
HoughLinesP(dst, lines, 1, CV_PI / 180, 50, 50, 10);
for (size_t i = 0; i < lines.size(); i++)
{
Vec4i l = lines[i];
line(cdst, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0, 0, 255), 3, LINE_AA);
}
#endif
imshow("source", src);
imshow("detected lines", cdst);
waitKey();
return 0;
}
To make it work for the 4-point algorithm we need to use an OpenGL display, which requires GLEW and GLUT.
In the property window, the project must be in 32-bit mode.
1) In C/C++ > General > Additional Include Directories, append:
C:\Glew and Glut\glew-1.11.0\include;C:\Glew and Glut\freeglut\include
2) In Linker > General > Additional Library Directories, append:
C:\Glew and Glut\freeglut\lib;C:\Glew and Glut\glew-1.11.0\lib
3) In Linker > Input > Additional Dependencies, append:
freeglut.lib;glew32.lib;%(AdditionalDependencies)
A minimal program to check that these settings link correctly is sketched below.
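The following is a minimal sketch, not part of the original notes, to confirm that the GLEW and GLUT settings above are picked up. The file name gl_link_test.cpp and the window title are arbitrary; it only opens a window, initializes GLEW, and prints the OpenGL version.
//gl_link_test.cpp (hypothetical file name, only for checking the GLEW/GLUT link settings)
#include "stdafx.h"
#include <GL/glew.h> // GLEW must be included before the GLUT header
#include <GL/glut.h> // freeglut
#include <cstdio>
static void display(void)
{
glClear(GL_COLOR_BUFFER_BIT); // just clear the window
glutSwapBuffers();
}
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
glutInitWindowSize(320, 240);
glutCreateWindow("GLEW/GLUT link test");
if (glewInit() != GLEW_OK) // fails if glew32.lib/glew32.dll cannot be found
{
printf("glewInit failed\n");
return -1;
}
printf("OpenGL version: %s\n", (const char*)glGetString(GL_VERSION));
glutDisplayFunc(display);
glutMainLoop(); // enter the freeglut event loop
return 0;
}
If the include directories, library directories, and .lib names above are correct, this builds and shows an empty window; a linker error here means one of the three settings is wrong.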
Installation guide (opencv3.1.0, vs2013, win10), updated 2 Dec 2016
Test only in Debug mode and forget Release mode, because if you use the wrong lib it crashes.
Link only to opencv_world310d.lib (the debug library); do not link to opencv_world310.lib (the release library).
Verified OK with opencv3.1.0, vs2013, win10.
https://jugaadjump.wordpress.com/2016/03/06/install-opencv3-1-0-on-windows10/
1. Download OpenCV and VisualStudioExpress (or use vs2013)
2. Install Visual Studio Express, Extract OpenCV (e.g. opencv310)
3. On the PC, edit the System Environment Variables:
• Add OPENCV_DIR with the value C:\opencv\build\x64\vc12
• (according to where you have extracted), and add to Path:
%OPENCV_DIR%\bin
4. Make a new project in Visual Studio:
Create a project (Win32 console application), then change it to x64 as follows.
5. Right-click the project name >> select Properties >> add a new property sheet under the Debug
| x64 folder in Configuration Manager.
6. Add the active solution.
7. Active solution >> select New >> select x64 (copy settings from Win32).
8.
9.
Now change the properties of all projects you will create later.
10. From the top Microsoft Visual Studio menu, select VIEW >> PROPERTY WINDOW (in the lower-right
window, select Property Manager); the top-right window will then show it >> open Debug | x64 >> right-click
Microsoft.Cpp.x64.user and do the following.
11. Make 5 (3+2) changes in the property sheet
1) C/C++->General->Additional Include Directories-> Add
$(OPENCV_DIR)\..\..\include
2) Linker->Input->Additional Dependencies option-> Add opencv_world310d.lib
Better not to add the release library opencv_world310.lib; keep Debug mode always. If you really do need Release
mode, add it with care. If you debug while linked against the release lib (opencv_world310.lib), the system may crash.
3) Linker->General->Additional Library Directories-> add
$(OPENCV_DIR)\lib
Steps 4 and 5 may not be needed, but they do no harm:
4) Better to do this: in VC++ Directories >> add to Include Directories:
$(OPENCV_DIR)\..\..\include
5) Better to do this: in VC++ Directories >> add to Library Directories:
$(OPENCV_DIR)\lib
12. Now you can create a new project with the above properties. Then, after the project is
created, in the Solution Explorer window (VIEW >> Solution Explorer):
13. Test and Display an image
• Select the project, and add a new C++ file (some examples are shown below) under the
source files option
• Paste the following code.
• Add #include "stdafx.h" to source
• Change the solution platform to x64 and select Build Solution
14. Open the Project2/x64/Debug folder and add an image.
15. Open a command prompt, change to the above folder, and run it as Project2.exe image_name.jpg
(a minimal sketch that reads the image name from the command line is shown after this list).
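The display examples below hard-code the image path; as a variation, not from the original notes, here is a minimal sketch that takes the image name from the command line so it can be run as Project2.exe image_name.jpg, as in step 15. The file name display_argv.cpp is arbitrary and the project settings above are assumed.
//display_argv.cpp (hypothetical file name; assumes the OpenCV 3.x settings above)
#include "stdafx.h"
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>
using namespace cv; using namespace std;
int main(int argc, char** argv)
{
if (argc < 2)
{
cout << "Usage: Project2.exe image_name.jpg" << endl;
return -1;
}
Mat image = imread(argv[1], IMREAD_COLOR); // Read the file named on the command line
if (image.empty()) // Check for invalid input
{
cout << "Could not open or find " << argv[1] << endl;
return -1;
}
namedWindow("Display window", WINDOW_AUTOSIZE); // Create a window for display
imshow("Display window", image); // Show the image inside it
waitKey(0); // Wait for a keystroke in the window
return 0;
}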
To compile and debug: at the top menu of Visual Studio, select BUILD >> CLEAN SOLUTION >> REBUILD SOLUTION.
Then select the "Local Windows Debugger" icon to run, or simply press F5.
http://docs.opencv.org/3.0-beta/doc/tutorials/introduction/windows_visual_studio_Opencv/windows_visual_studio_Opencv.html
////////////////////////////////////////////////////////////////////////////////////////////////////
//
// example 1 begins--------------------------------------------------------
//display1.cpp
#include "stdafx.h"
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
using namespace cv; using namespace std;
int main(int argc, char** argv)
{
Mat image;
image = imread("c:\\images\\lena.jpg"); // Read the file
if (!image.data) // Check for invalid input
{
cout << "Could not open or find the image" << std::endl;
return -1;
}
namedWindow("Display window", WINDOW_AUTOSIZE); // Create a window for display.
imshow("Display window", image); // Show our image inside it.
waitKey(0); // Wait for a keystroke in the window
return 0;
}
// example 1 ends--------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////////////////////////
//
// example 2 begins--------------------------------------------------------
//====test program display a picture
//display2.cpp
#include "stdafx.h"
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>
using namespace cv; using namespace std;
int main(int argc, char** argv)
{
Mat image;
image = imread("C:\\opencv‐3\\abc.jpg", IMREAD_COLOR); // Read the file
// image = imread("C:\\opencv‐3\\abc.jpg", IMREAD_COLOR); // Read the file image =
imread("C:\\images\\lena.jpg"); // Read the file
if (image.empty()) // Check for invalid input
{
cout << "Could not open or find the image" << std::endl; return ‐1;
}
namedWindow("Display window", WINDOW_AUTOSIZE); // Create a window for display.
imshow("Display window", image); // Show our image inside it.
waitKey(0); // Wait for a keystroke in the window return 0;
}
// example 2 ends--------------------------------------------------------
////////////////////////////////////////////////////////////////////////////////////////////////////
//
// example 3 begins--------------------------------------------------------
//Lkemo.cpp
// test31.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include "opencv2/video/tracking.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <ctype.h>
using namespace cv; using namespace std;
void help()
{
// print a welcome message, and the OpenCV version
cout << "\nThis is a demo of Lukas‐Kanade optical flow lkdemo(),\n" "Using OpenCV version %s\n" <<
CV_VERSION << "\n"
<< endl;
cout << "\nHot keys: \n"
"\tESC ‐ quit the program\n"
"\tr ‐ auto‐initialize tracking\n" "\tc ‐ delete all the points\n"
"\tn ‐ switch the \"night\" mode on/off\n"
"To add/remove a feature point click it\n" << endl;
}
Point2f pt;
bool addRemovePt = false;
void onMouse(int event, int x, int y, int flags, void* param)
{
if (event == CV_EVENT_LBUTTONDOWN)
{
pt = Point2f((float)x, (float)y); addRemovePt = true;
}
}
int main(int argc, char** argv)
{
VideoCapture cap;
TermCriteria termcrit(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03);
Size winSize(10, 10);
const int MAX_COUNT = 500; bool needToInit = false; bool nightMode = false;
if (argc == 1 || (argc == 2 && strlen(argv[1]) == 1 && isdigit(argv[1][0])))
cap.open(argc == 2 ? argv[1][0] - '0' : 0);
else if (argc == 2)
cap.open(argv[1]);
if (!cap.isOpened())
{
cout << "Could not initialize capturing...\n"; return 0;
}
help();
namedWindow("LK Demo", 1); setMouseCallback("LK Demo", onMouse, 0);
Mat gray, prevGray, image; vector<Point2f> points[2];
for (;;)
{
Mat frame; cap >> frame;
if (frame.empty()) break;
frame.copyTo(image); cvtColor(image, gray, CV_BGR2GRAY);
if (nightMode)
image = Scalar::all(0);
if (needToInit)
{
// automatic initialization
goodFeaturesToTrack(gray, points[1], MAX_COUNT, 0.01, 10, Mat(), 3, 0, 0.04);
cornerSubPix(gray, points[1], winSize, Size(-1, -1), termcrit);
addRemovePt = false;
}
else if (!points[0].empty())
{
vector<uchar> status;
vector<float> err;
if (prevGray.empty())
gray.copyTo(prevGray);
calcOpticalFlowPyrLK(prevGray, gray, points[0], points[1],
status, err, winSize, 3, termcrit, 0);
size_t i, k;
for (i = k = 0; i < points[1].size(); i++)
{
if (addRemovePt)
{
if (norm(pt - points[1][i]) <= 5)
{
addRemovePt = false; continue;
}
}
if (!status[i])
continue;
points[1][k++] = points[1][i];
circle(image, points[1][i], 3, Scalar(0, 255, 0), -1, 8);
}
points[1].resize(k);
}
if (addRemovePt && points[1].size() < (size_t)MAX_COUNT)
{
vector<Point2f> tmp; tmp.push_back(pt);
cornerSubPix(gray, tmp, winSize, Size(-1, -1), termcrit);
points[1].push_back(tmp[0]);
addRemovePt = false;
}
needToInit = false; imshow("LK Demo", image);
char c = (char)waitKey(10); if (c == 27)
break; switch (c)
{
case 'r':
needToInit = true; break;
case 'c':
points[1].clear(); break;
case 'n':
nightMode = !nightMode; break;
default:
;
}
std::swap(points[1], points[0]); swap(prevGray, gray);
}
return 0;
}
////////////////////////////////////////////////////////////////////////////////////////////////////
//
// example 4 begins--------------------------------------------------------
///////camshift.cpp camshift-demo from openc3.1-sample
#include "stdafx.h"
#include <opencv2/core/utility.hpp>
#include "opencv2/video/tracking.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/videoio.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
#include <ctype.h>
using namespace cv; using namespace std;
Mat image;
bool backprojMode = false; bool selectObject = false; int trackObject = 0;
bool showHist = true; Point origin;
Rect selection;
int vmin = 10, vmax = 256, smin = 30;
static void onMouse(int event, int x, int y, int, void*)
{
if (selectObject)
{
selection.x = MIN(x, origin.x);
selection.y = MIN(y, origin.y);
selection.width = std::abs(x - origin.x);
selection.height = std::abs(y - origin.y);
selection &= Rect(0, 0, image.cols, image.rows);
}
switch (event)
{
case EVENT_LBUTTONDOWN:
origin = Point(x, y); selection = Rect(x, y, 0, 0); selectObject = true;
break;
case EVENT_LBUTTONUP:
selectObject = false;
if (selection.width > 0 && selection.height > 0) trackObject = -1;
break;
}
}
static void help()
{
cout << "\nThis is a demo that shows mean‐shift based tracking\n"
"You select a color objects such as your face and it tracks it.\n" "This reads from video camera (0
by default, or the camera number
the user enters\n"
"Usage: \n"
" ./camshiftdemo [camera number]\n";
cout << "\n\nHot keys: \n"
"\tESC ‐ quit the program\n" "\tc ‐ stop the tracking\n"
"\tb ‐ switch to/from backprojection view\n" "\th ‐ show/hide object histogram\n"
"\tp ‐ pause video\n"
"To initialize tracking, select the object with mouse\n";
}
const char* keys =
{
"{@camera_number| 0 | camera number}"
};
int main(int argc, const char** argv)
{
help();
VideoCapture cap; Rect trackWindow; int hsize = 16;
float hranges[] = { 0, 180 }; const float* phranges = hranges;
CommandLineParser parser(argc, argv, keys); int camNum = parser.get<int>(0);
cap.open(camNum); if (!cap.isOpened())
{
help();
cout << "***Could not initialize capturing...***\n"; cout << "Current parameter's value: \n";
parser.printMessage();
return -1;
}
namedWindow("Histogram", 0);
namedWindow("CamShift Demo", 0); setMouseCallback("CamShift Demo", onMouse, 0);
createTrackbar("Vmin", "CamShift Demo", &vmin, 256, 0);
createTrackbar("Vmax", "CamShift Demo", &vmax, 256, 0);
createTrackbar("Smin", "CamShift Demo", &smin, 256, 0);
Mat frame, hsv, hue, mask, hist, histimg = Mat::zeros(200, 320, CV_8UC3), backproj;
bool paused = false;
for (;;)
{
if (!paused)
{
cap >> frame;
if (frame.empty()) break;
}
frame.copyTo(image);
if (!paused)
{
cvtColor(image, hsv, COLOR_BGR2HSV);
if (trackObject)
{
int _vmin = vmin, _vmax = vmax;
inRange(hsv, Scalar(0, smin, MIN(_vmin, _vmax)), Scalar(180, 256, MAX(_vmin, _vmax)), mask);
int ch[] = { 0, 0 }; hue.create(hsv.size(), hsv.depth()); mixChannels(&hsv, 1, &hue, 1, ch, 1);
if (trackObject < 0)
{
Mat roi(hue, selection), maskroi(mask, selection);
calcHist(&roi, 1, 0, maskroi, hist, 1, &hsize, &phranges);
normalize(hist, hist, 0, 255, NORM_MINMAX);
trackWindow = selection;
trackObject = 1;
histimg = Scalar::all(0);
int binW = histimg.cols / hsize;
Mat buf(1, hsize, CV_8UC3);
for (int i = 0; i < hsize; i++)
buf.at<Vec3b>(i) = Vec3b(saturate_cast<uchar>(i*180. / hsize), 255, 255);
cvtColor(buf, buf, COLOR_HSV2BGR);
for (int i = 0; i < hsize; i++)
{
int val = saturate_cast<int>(hist.at<float>(i)*histimg.rows / 255);
rectangle(histimg, Point(i*binW, histimg.rows),
Point((i + 1)*binW, histimg.rows - val),
Scalar(buf.at<Vec3b>(i)), -1, 8);
}
}
calcBackProject(&hue, 1, 0, hist, backproj, &phranges);
backproj &= mask;
RotatedRect trackBox = CamShift(backproj, trackWindow,
TermCriteria(TermCriteria::EPS | TermCriteria::COUNT, 10, 1));
if (trackWindow.area() <= 1)
{
int cols = backproj.cols, rows = backproj.rows, r = (MIN(cols, rows) + 5) / 6;
trackWindow = Rect(trackWindow.x - r, trackWindow.y - r,
trackWindow.x + r, trackWindow.y + r) & Rect(0, 0, cols, rows);
}
if (backprojMode)
cvtColor(backproj, image, COLOR_GRAY2BGR);
ellipse(image, trackBox, Scalar(0, 0, 255), 3, LINE_AA);
}
}
else if (trackObject < 0) paused = false;
if (selectObject && selection.width > 0 && selection.height > 0)
{
Mat roi(image, selection); bitwise_not(roi, roi);
}
imshow("CamShift Demo", image); imshow("Histogram", histimg);
char c = (char)waitKey(10); if (c == 27)
break; switch (c)
{
case 'b':
backprojMode = !backprojMode; break;
case 'c':
trackObject = 0;
histimg = Scalar::all(0); break;
case 'h':
showHist = !showHist;
if (!showHist)
destroyWindow("Histogram");
else
namedWindow("Histogram", 1);
break;
case 'p':
paused = !paused; break;
default:
;
}
}
return 0;
}
// example 4 ends--------------------------------------------------------
///////////////////////////////////////////////////////////////////// Additional information
From https://msdn.microsoft.com/en-us/library/9yb4317s.aspx
set up C++ applications to target 64-bit platforms
1. Open the C++ project that you want to configure.
2. Open the property pages for that project. For more information, see How to: Open Project
Property Pages.
Note
For .NET projects, make sure that the Configuration Properties node, or one of its child nodes, is
selected in the <Projectname> Property Pages dialog box; otherwise, the Configuration Manager
button remains unavailable.
3. Choose the Configuration Manager button to open the Configuration Manager
dialog box.
4. In the Active Solution Platform drop-down list, select the <New…> option to open the New
Solution Platform dialog box.
5. In the Type or select the new platform drop-down list, select a 64-bit platform.
Note
In the New Solution Platform dialog box, you can use the Copy settings from
option to copy existing project settings into the new 64-bit project configuration.
6. Choose the OK button. The platform that you selected in the preceding step appears under
Active Solution Platform in the Configuration Manager dialog box.
7. Choose the Close button in the Configuration Manager dialog box, and then choose the OK button
in the <Projectname> Property Pages dialog box.
To copy Win32 project settings into a 64-bit project configuration
• When the New Solution Platform dialog box is open while you set up a project to target a
64-bit platform, in the Copy settings from drop-down list, select Win32. These project settings are
automatically updated on the project level:
o The /MACHINE linker option is set to /MACHINE:X64.
o Register Output is turned OFF. For more information, see Linker Property Pages.
o Target Environment is set to /env x64. For more information, see MIDL Property Pages: General.
o Validate Parameters is cleared and reset to the default value. For more information, see MIDL
Property Pages: Advanced.
o If Debug Information Format was set to /ZI in the Win32 project configuration, then it is set to
/Zi in the 64-bit project configuration. For more information, see /Z7, /Zi, /ZI (Debug Information Format).
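To double-check that a project configured as above really targets a 64-bit platform, here is a minimal run-time check, not from the MSDN article; it relies only on the standard MSVC _WIN64 macro and pointer size, and the file name check_x64.cpp is arbitrary.
//check_x64.cpp (hypothetical file name)
#include "stdafx.h"
#include <iostream>
int main()
{
// On a 64-bit build, pointers are 8 bytes and MSVC defines _WIN64.
std::cout << "sizeof(void*) = " << sizeof(void*) << " bytes" << std::endl;
#ifdef _WIN64
std::cout << "Built for a 64-bit (x64) platform" << std::endl;
#else
std::cout << "Built for a 32-bit (Win32) platform" << std::endl;
#endif
return 0;
}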