Car Collision Prevention Project – Disparity using OpenCV

To estimate distances to objects, there are two options: use the sizes of known objects, such as car widths or license plates, or use a disparity map. I decided to start with the disparity approach, which produces a new image distinguishing near and far objects, without giving exact distances.

The problem with using the size of known objects is that they might not be detected in the image, and even a known object can vary in size. For example, with vehicle detection there is no standard minimum vehicle width on the road to measure against when estimating how far away detected vehicles are. If we knew the narrowest vehicle on the road, we would have to use its width, since it would be the worst-case scenario; as a result, a truck would appear closer to our car than a small car at the same distance. Beyond the lack of a standard width, this method would only treat vehicles as possible collision objects. This is a huge flaw: cyclists, pedestrians, and other objects would all have to be added manually. On top of all this, using license plates as a size standard would be even harder, as they are difficult to recognise in the image until they are already close to the camera.

The other approach is a disparity map, which gives a relative depth layout of the scene generated from both cameras. The issue with this approach is that the values are relative and are not real distances. For our testing purposes we can map these values to real distances, but that might not be ideal for a production car collision prevention system. There can also be sources of error, such as light shining into a particular part of the image. This can possibly be mitigated with a filter over the camera lens or with erosion from mathematical morphology.

Setting up Disparity

First off, in my code below I changed my frames from colour to grayscale, and performance went up from around 14 FPS to around 16.5 FPS.

Secondly, the version of OpenCV from the official repository is an older version, so upgrading to 3.0+ is beneficial. One good guide can be found at https://github.com/jabelone/OpenCV-for-Pi, which requires you to be running Raspbian Jessie. The version installed using sudo apt-get install python-opencv is 2.4.9.

Getting Bad Results

As you can see from the example above, I didn't get the results I wanted. I did some research to see whether I need to calibrate my cameras to get better results. It seems the cameras don't need to be calibrated (we don't need to find the camera matrices for our two cameras), but I do need to tune the StereoBM function's parameters to get the best possible output. In the next blog post I will go over what I did to improve my current result.

The code for the part above:

Display Two Cameras with Normalization