Detecting lines
Using OpenCV's Canny edge detector we can detect the edges in the image. Before detecting edges we apply a Gaussian filter to the image; it smooths the image and suppresses noise that would otherwise produce unwanted edges.
https://docs.opencv.org/4.x/da/d22/tutorial_py_canny.html
From Canny we get the edge image. To get equations of the lines we run the Hough transform on the edge image, which finds the lines that best fit the edge pixels; we can then use these lines as the equations of the edges.
The Hough transform returns each line in the (rho, theta) form, i.e. x·cos(theta) + y·sin(theta) = rho.
The (0, 0) coordinate is at the top-left corner of the image.
The positive x direction runs from left to right, and the positive y direction runs from top to bottom.
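As a quick illustration of this convention, here is a minimal sketch (not part of the pipeline below; the helper name rho_theta_to_slope_intercept is only for this example) of how a (rho, theta) line converts to the y = mx + c form that is used later for finding intersections:

import numpy as np

def rho_theta_to_slope_intercept(rho, theta):
    # A Hough line satisfies x*cos(theta) + y*sin(theta) = rho.
    # Solving for y gives m = -cos(theta)/sin(theta) and c = rho/sin(theta).
    if np.isclose(np.sin(theta), 0.0):
        return None, None  # vertical line: the slope is undefined
    m = -np.cos(theta) / np.sin(theta)
    c = rho / np.sin(theta)
    return m, c

# theta = 90 degrees gives a horizontal line y = rho
print(rho_theta_to_slope_intercept(100.0, np.pi / 2))  # roughly (0.0, 100.0)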
The image is placed at the centre of a big canvas so that, if the vanishing point lies outside the image, we can still capture it.
If we plot the lines directly they do not overlay the big canvas correctly; we have to add the offset of the image inside the canvas to every line in order to get the lines overlaying the image edges.
Now we have to find the vanishing point. We take the intersection point of every pair of lines as a candidate. As an error metric for a candidate we take the perpendicular distance from it to each of the lines and accumulate these distances. We compute this error for every intersection, and the candidate with the lowest error gives the best vanishing-point estimate.
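A minimal sketch of that error metric, assuming each filtered line is stored as its slope m and intercept c (the function name is only for illustration). The closed-form point-to-line distance used here is equivalent to dropping a perpendicular from the candidate point onto each line, which is what the full code further down does explicitly:

import math

def vanishing_point_error(x0, y0, lines):
    # lines is a list of (m, c) pairs describing y = m*x + c.
    # The perpendicular distance from (x0, y0) to such a line is
    # |m*x0 - y0 + c| / sqrt(m^2 + 1).
    err = 0.0
    for m, c in lines:
        dist = abs(m * x0 - y0 + c) / math.sqrt(m**2 + 1)
        err += dist**2
    return math.sqrt(err)

# The point (1, 1) lies on both y = x and y = -x + 2, so its error is 0
print(vanishing_point_error(1.0, 1.0, [(1.0, 0.0), (-1.0, 2.0)]))  # 0.0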
Here the vanishing point is marked. To get the best estimate we have to suppress the unwanted lines pointing in totally different directions, as in the figure above. In this figure we have got rid of those vertical lines by rejecting lines whose angle falls outside a predefined range; the range changes with the application.
import os
import cv2
import math
import numpy as np

Image = cv2.imread('/content/drive/MyDrive/Colab Notebooks/Vanishing point/Vanishing point 0.jpg')

# Converting to grayscale
GrayImage = cv2.cvtColor(Image, cv2.COLOR_BGR2GRAY)
# Blurring the image to reduce noise
BlurGrayImage = cv2.GaussianBlur(GrayImage, (5, 5), 1)
# Generating the edge image
EdgeImage = cv2.Canny(BlurGrayImage, 100, 200)

# Finding lines in the edge image (rho, theta form)
Lines = cv2.HoughLines(EdgeImage, 1, np.pi / 180, 100)
# Exit if no lines are found
if Lines is None:
    print("Not enough lines found in the image for vanishing point detection.")
    exit(0)

# Drawing the detected lines on the edge image
for line in Lines:
    rho, theta = line[0]
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a * rho
    y0 = b * rho
    # Taking two points far away on either side of (x0, y0) along the line
    x1 = int(x0 + 10000 * (-b))
    y1 = int(y0 + 10000 * (a))
    x2 = int(x0 - 10000 * (-b))
    y2 = int(y0 - 10000 * (a))
    img = cv2.line(EdgeImage, (x1, y1), (x2, y2), (255, 255, 255), 2)

cv2.imwrite('linesDetectedg.jpg', img)
REJECT_DEGREE_TH = 3.0

FilteredLines = []  # Filtered lines will be stored here
for Line in Lines:  # Iterating over each line
    [[rho, theta]] = Line  # Getting the (rho, theta) parameters of the line
    a = np.cos(theta)
    b = np.sin(theta)
    # Shifting the line into the big-canvas coordinates
    # (the image is placed with an offset of 2*width in x and 2*height in y)
    x0 = a * rho + 2 * len(Image[0])
    y0 = b * rho + 2 * len(Image)
    x1 = int(x0 + 10000 * (-b))
    y1 = int(y0 + 10000 * (a))
    x2 = int(x0 - 10000 * (-b))
    y2 = int(y0 - 10000 * (a))
    # Calculating the equation of the line: y = mx + c
    # If x1 != x2, the slope can be found with the regular equation
    if x1 != x2:
        m = (y2 - y1) / (x2 - x1)
    # If x1 == x2, the slope is infinity, thus a large value
    else:
        m = 100000000
    c = y2 - m * x2  # c = y - mx from the equation of the line
    # alpha is the angle of the line in degrees (-90 to +90)
    alpha = math.degrees(math.atan(m))
    # Keeping only lines whose angle lies between REJECT_DEGREE_TH and 50 degrees,
    # i.e. rejecting near-horizontal and steep/near-vertical lines
    if REJECT_DEGREE_TH <= abs(alpha) <= 50:
        FilteredLines.append([rho, theta, m, c])
# Placing the image at the centre of a big canvas so that a vanishing
# point lying outside the image can still be captured
added_images = np.zeros((5 * len(Image), 5 * len(Image[0]), 3), np.uint8)
added_images[2 * len(Image):3 * len(Image), 2 * len(Image[0]):3 * len(Image[0])] = Image

# Drawing the filtered lines on the big canvas
for line in FilteredLines:
    rho, theta, m, c = line
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a * rho + 2 * len(Image[0])
    y0 = b * rho + 2 * len(Image)
    x1 = int(x0 + 10000 * (-b))
    y1 = int(y0 + 10000 * (a))
    x2 = int(x0 - 10000 * (-b))
    y2 = int(y0 - 10000 * (a))
    bigimg = cv2.line(added_images, (x1, y1), (x2, y2), (255, 255, 255), 2)

cv2.imwrite('img.jpg', bigimg)
# Initializing variables
VanishingPoint = None    # Coordinates of the vanishing point
MinError = 100000000000  # Minimum error found so far (initially a large value)

for i in range(len(FilteredLines)):  # Iterating over the lines, taking 2 at a time
    for j in range(i + 1, len(FilteredLines)):
        # Reading the m and c values of the two lines
        m1, c1 = FilteredLines[i][2], FilteredLines[i][3]
        m2, c2 = FilteredLines[j][2], FilteredLines[j][3]
        # If the lines are not parallel
        if m1 != m2:
            # Finding their intersection point (x0, y0)
            x0 = (c1 - c2) / (m2 - m1)
            y0 = m1 * x0 + c1
            err = 0  # Error of this candidate point
            # Iterating over all lines for the error calculation
            for k in range(len(FilteredLines)):
                # Reading the m and c values of line L
                m, c = FilteredLines[k][2], FilteredLines[k][3]
                # Calculating m and c of L_, the perpendicular from (x0, y0) to L
                m_ = (-1 / m)
                c_ = y0 - m_ * x0
                # Calculating (x_, y_), the point of intersection of L and L_
                x_ = (c - c_) / (m_ - m)
                y_ = m_ * x_ + c_
                # Calculating the distance between (x0, y0) and (x_, y_)
                l = math.sqrt((y_ - y0)**2 + (x_ - x0)**2)
                err += l**2  # Adding to the error value
            err = math.sqrt(err)  # Finally taking the square root of the error
            # If the candidate lies inside the big canvas and its error is lower
            # than the minimum so far, update the vanishing point estimate
            if 0 < x0 < len(bigimg[0]) and 0 < y0 < len(bigimg):
                if MinError > err:
                    MinError = err
                    VanishingPoint = [x0, y0]
# Marking the vanishing point on the big canvas, if one was found
if VanishingPoint is not None:
    vanishing_pt_img = cv2.circle(bigimg, (int(VanishingPoint[0]), int(VanishingPoint[1])), 50, (255, 100, 30), 10)
    cv2.imwrite('DotDetected.jpg', vanishing_pt_img)
print(VanishingPoint)
But when this image is fed in, it takes a lot of time. The reason is that, if you look closely, the texture of this image produces thousands of lines, and because of the nested for loops over the lines the running time grows very quickly with the number of lines.
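A rough back-of-the-envelope count shows why (a sketch; the line counts below are illustrative, not measured from this image): every pair of lines produces a candidate intersection, and every candidate is then checked against all lines.

# Candidate intersections and distance evaluations for N detected lines
for N in (100, 1000, 5000):
    pairs = N * (N - 1) // 2      # one candidate per pair of lines
    distance_checks = pairs * N   # each candidate is checked against all N lines
    print(N, pairs, distance_checks)

At 100 lines this is about half a million distance evaluations, but at 5000 lines it is already over 60 billion, which is why the texture-heavy image takes so long.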
As a solution I first thought quantizing the image could be a good idea: every pixel is mapped to one of a few gray levels, which increases the contrast. The expectation was that this would get rid of the texture. But as you can see the texture still remains, so the problem was not solved.
# Quantizing the grayscale image to a few gray levels
GrayImage = cv2.cvtColor(Image, cv2.COLOR_BGR2GRAY)
GrayImage = (85 * (np.ceil(GrayImage / 85))).astype(np.uint8)
print(np.max(GrayImage))
cv2.imwrite('GrayImage.jpg', GrayImage)
The gradient threshold controls the number of lines found in the image. To reduce the number of lines, increase maxval, the upper threshold of the Canny call.
By repeatedly increasing the upper threshold while the number of detected lines is higher than 100, we obtain a dynamic upper threshold that is more general: it automatically controls the number of lines.
# Raising the Canny upper threshold until at most 100 lines remain,
# starting from the upper threshold used earlier
upper_thresh = 200
EdgeImage = cv2.Canny(BlurGrayImage, 100, upper_thresh)
Lines = cv2.HoughLines(EdgeImage, 1, np.pi / 180, 100)
number_of_lines = 0 if Lines is None else len(Lines)
while number_of_lines > 100:
    upper_thresh = upper_thresh + 50
    EdgeImage = cv2.Canny(BlurGrayImage, 100, upper_thresh)
    Lines = cv2.HoughLines(EdgeImage, 1, np.pi / 180, 100)
    number_of_lines = 0 if Lines is None else len(Lines)
    print(number_of_lines)