How Does FASTag Work? (Computer Vision in the Backend)
On 27 December 2020, I was travelling to my native town for a vacation. During the trip we crossed four toll plazas, and I wondered how FASTag works. Coincidentally, in today's news, the Indian Government mandated the use of FASTag from January 1 for all vehicles passing through toll plazas. Nitin Gadkari, Union Minister for Road Transport, Highways and MSMEs, confirmed that FASTags will be mandatory for payment at toll plazas, enabling contactless and electronic toll payments.
What is FASTag?
A FASTag is a sticker attached to the inside of your car's windshield. It carries a Radio-Frequency Identification (RFID) code linked to the registration details of your vehicle. As you drive through any toll plaza on any national highway in India, FASTag readers read the code and deduct the amount required to pass the barrier, all without you having to stop, interact with a person at the toll plaza, or pay cash.
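Conceptually, the deduction step is simple: the reader resolves a scanned tag ID to a prepaid account and subtracts the toll. Here is a minimal, purely illustrative sketch of that flow; the tag IDs, balances, and flat TOLL_FEE below are made-up values, not the actual NPCI/bank implementation.

# Hypothetical sketch of the FASTag deduction flow, for intuition only.
# Tag IDs, account balances, and the flat toll fee are made-up values.
TOLL_FEE = 60  # assumed flat fee in rupees

# the RFID reader resolves a scanned tag ID to a prepaid account balance
accounts = {'34161FA820328E5F0140': 500, '34161FA820328E5F0141': 40}

def process_vehicle(tag_id: str) -> bool:
    """Deduct the toll if the tag is known and funded; open the barrier on success."""
    balance = accounts.get(tag_id)
    if balance is None or balance < TOLL_FEE:
        return False  # unknown tag or insufficient balance: fall back to manual payment
    accounts[tag_id] = balance - TOLL_FEE
    return True

print(process_vehicle('34161FA820328E5F0140'))  # True, balance drops to 440
print(process_vehicle('34161FA820328E5F0141'))  # False, only 40 rupees left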
How to extract FASTag stickers from the windshield?
To enable contactless and digital payment, the first step is to extract the FASTag sticker from the windshield. The challenge in identifying the sticker differs from car to car: in one car it might be on the right side, while in another there may be multiple stickers on the windshield, as shown below.
Code Snippet
import cv2
import imutils
import os
import matplotlib.pyplot as plt
import numpy as np
# compare_ssim was removed from skimage.measure in newer releases;
# structural_similarity from skimage.metrics is the current equivalent
from skimage.metrics import structural_similarity as compare_ssim

# reference FASTag template, resized to a fixed size for SSIM comparison
original = cv2.cvtColor(cv2.imread('img1.png'), cv2.COLOR_BGR2GRAY)
original = cv2.resize(original, (120, 20))
plt.imshow(original, 'gray')

# visualizing the available images
images = os.listdir('images')
plt.figure(figsize=(50, 50))
for idx, image in enumerate(images):
    # read the image
    img = cv2.imread('images/' + image)
    # OpenCV reads an image in BGR (Blue, Green, Red) channel order;
    # the following line converts BGR --> RGB format
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    # convert to grayscale and smooth to suppress noise before edge detection
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    # Canny edge detection
    edged = cv2.Canny(gray, 10, 130)
    orig = img.copy()
    # find all the contours and grab the detected contours
    cnts = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    # sort the contours by area and keep only the top 10
    cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:10]
    # show the figure at 50x50 inches
    plt.figure(figsize=(50, 50))
    # loop over the contours
    for c in cnts:
        # find the arc length of each detected contour
        peri = cv2.arcLength(c, True)
        # approximate the contour
        apprx = cv2.approxPolyDP(c, 0.02 * peri, True)
        # convert the approximated contour to a bounding rectangle
        x, y, w, h = cv2.boundingRect(apprx)
        # crop the candidate sticker region
        crop = img[y + 7:y + h // 3, x + w // 4:x + w - w // 4, :]
        if crop.size == 0:
            continue
        # convert the cropped image to grayscale
        crop = cv2.cvtColor(crop, cv2.COLOR_RGB2GRAY)
        # threshold the cropped image for better detection of the FASTag
        thresh = cv2.threshold(crop, 20, 150,
                               cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
        # erode the detected tags
        kernel = np.ones((3, 3), np.uint8)
        erosion = cv2.erode(255 - thresh, kernel, iterations=1)
        erosion = cv2.resize(255 - erosion, (120, 20))
        # compare each candidate with the original FASTag template and
        # draw a rectangle over the matches
        if compare_ssim(erosion, original) > 0.4:
            area = cv2.contourArea(c)
            cv2.rectangle(orig, (x, y), (x + w, y + h), (255, 0, 0), 5)
            cv2.putText(orig, str(area), (x, y),
                        cv2.FONT_HERSHEY_SIMPLEX,
                        1, (0, 255, 0), 2)
    plt.imshow(orig)
    plt.axis('off')
plt.show()
The algorithm worked well: it detected the FASTag stickers and bounded them with red boxes.
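The key decision in this pipeline is the SSIM comparison: each cleaned-up candidate crop is scored against the reference template, and anything above 0.4 is treated as a FASTag. Here is a standalone sketch of just that check, assuming a reference image img1.png and a saved candidate crop crop.png (both hypothetical file names):

import cv2
from skimage.metrics import structural_similarity

# both images must be grayscale and the same size for a direct SSIM score
template = cv2.cvtColor(cv2.imread('img1.png'), cv2.COLOR_BGR2GRAY)
candidate = cv2.cvtColor(cv2.imread('crop.png'), cv2.COLOR_BGR2GRAY)
candidate = cv2.resize(candidate, (template.shape[1], template.shape[0]))

# SSIM returns 1.0 for identical images; the pipeline above accepts > 0.4
score = structural_similarity(template, candidate)
print(f'SSIM score: {score:.2f}')

The 0.4 threshold is a tunable trade-off: raising it cuts false positives but may reject tags photographed at an angle or under poor lighting.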