Google’s big project for 2017, and really for all of Silicon Valley, is the self-driving car. Tests run every day on mapping algorithms, motion detection, rapid response, and synchronization with city traffic systems, all aimed at delivering the smoothest, safest drive possible, and the improvement is starting to show.
Google’s extensive testing has sent it back to the drawing board many times to solve challenging problems. Take stop signs: Google first considered mapping every stop sign in the world, which in hindsight was impractical, but it found a better solution: recognition. Just as Google Glass recognizes faces, the self-driving car recognizes stop signs from a general description, whether they stand at the roadside or in the hands of traffic wardens.
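The difference between the two approaches can be sketched in a few lines. This is a toy illustration, not Google's actual system: instead of looking up a sign's location in a database, the car matches whatever it sees against a general description, so a warden's handheld sign qualifies just as well as a roadside one.

```python
# Toy sketch: description-based recognition vs. database lookup.
# All names here (Sign, STOP_DESCRIPTION) are hypothetical.

from dataclasses import dataclass

@dataclass
class Sign:
    color: str      # dominant color detected
    shape: str      # detected outline
    text: str       # legend read off the sign
    mounted: bool   # on a post vs. handheld

# The "general description" of a stop sign: red octagon reading STOP.
STOP_DESCRIPTION = {"color": "red", "shape": "octagon", "text": "STOP"}

def is_stop_sign(sign: Sign) -> bool:
    # Match on features only; how the sign is mounted is irrelevant,
    # so a traffic warden's handheld sign is recognized too.
    return (sign.color == STOP_DESCRIPTION["color"]
            and sign.shape == STOP_DESCRIPTION["shape"]
            and sign.text == STOP_DESCRIPTION["text"])

roadside = Sign("red", "octagon", "STOP", mounted=True)
handheld = Sign("red", "octagon", "STOP", mounted=False)
yield_sign = Sign("red", "triangle", "YIELD", mounted=True)
```

The point of the sketch is that nothing in `is_stop_sign` depends on where the sign is, which is exactly why recognition scales where a world map of sign locations would not.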
For pesky motorcyclists and bicyclists, Google fed the car data from many real-life situations and then left it to make up its own mind about which way to go. When put to the test, the car even reacted to changes in cyclists’ patterns of behavior.
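A crude way to picture "learning from real-life situations and adapting" is a predictor that tracks what a cyclist has actually done and shifts its expectation as the pattern shifts. This is a hypothetical toy, not Google's code, and the class and action names are invented for illustration:

```python
# Toy sketch (hypothetical, not Google's system): a frequency-based
# predictor of a cyclist's next move that adapts as behavior changes.

from collections import Counter

class CyclistPredictor:
    def __init__(self):
        self.observed = Counter()  # counts of each observed action

    def observe(self, action: str) -> None:
        # Record one real-life observation of the cyclist's behavior.
        self.observed[action] += 1

    def predict(self) -> str:
        # Expect the behavior seen most often so far.
        return self.observed.most_common(1)[0][0]

predictor = CyclistPredictor()
# Early observations: the cyclist mostly holds their lane.
for action in ["keep_lane", "keep_lane", "swerve_left"]:
    predictor.observe(action)
early_guess = predictor.predict()   # "keep_lane"
# The pattern changes: the cyclist starts swerving regularly.
for action in ["swerve_left", "swerve_left"]:
    predictor.observe(action)
later_guess = predictor.predict()   # "swerve_left"
```

A real system would predict from far richer signals than raw frequencies, but the shape is the same: the prediction is not hard-coded, it follows the data, which is what lets the car react when cyclists change their behavior.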
This is where Google really seems to excel: programming cars to adapt. After all, it doesn’t matter how good your car is at recognizing stop signs if it cannot account for something unpredictable or human.
Obviously this technology has a long way to go. Some cars stop dead in their tracks when faced with a tricky manoeuvre to get out of a tight spot, or fail to react at all to reckless cyclists breaking the law. 2017 is coming, but it may not be time for self-driving cars just yet.