Making Self-Driving Cars Better Drivers

Posted on 01/04/2019 by Tarun_Reddy in Industrial

AI has been making leaps and bounds, and its potential seems limitless. We see AI helping doctors diagnose disease, sniffing out fraudulent transactions, and powering a whole range of other applications with a promising future.

But one thing that remains difficult for machines is predicting human actions that don’t follow a specific pattern. It’s no secret that we don’t always behave according to logical, legal, or social rules.

For example, we sometimes walk down the street while focused on reading messages on our smartphones. An intelligent machine, by contrast, would expect us to be more aware of our surroundings, like traffic.

There Have Already Been Fatalities

Not following traffic rules can have fatal consequences. We learned this lesson with the tragic death of Elaine Herzberg in Arizona.

Herzberg was crossing the road when she was struck by a self-driving car. Experts believe that part of the reason the car didn’t detect her was that she was crossing outside of the crosswalk. It was a tragic event that highlighted the need to rethink how self-driving cars should be “taught” to drive better.

The University of Michigan’s Research Project

The University of Michigan started a research project to deal with this problem. It aims to teach vehicles to better predict actions a pedestrian might take. Researchers have been working on programming computers to understand how humans move.

They’ve focused on the gait and symmetry of the human body: how we move and how we place our feet. The idea is that if machines can learn how a human naturally moves, they’ll be better able to work out what a pedestrian’s next step might be, and whether they should take measures to avoid a collision.

How Are They Collecting Data?

The research team is collecting data with cameras. The video footage is then fed into a simulation to create a full 3D reconstruction of each movement. Researchers have already logged thousands of different movement patterns.

This collection of data aims to teach computers how to recognize different potential movements based on the placement of the feet and body posture. The onboard computer is then able to tell the difference between someone who is stationary and someone who is starting to take a step.
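To make that distinction concrete, here is a minimal sketch in Python of how an onboard system could separate a stationary pedestrian from one starting to take a step, using a short window of 3D foot positions. This is not the university’s actual model; the keypoint layout, frame rate, and speed threshold are all illustrative assumptions.

```python
# A minimal, illustrative sketch (not the university's actual model) of how
# an onboard system might separate "standing still" from "starting a step"
# using a short window of 3D ankle positions extracted from video.
# The keypoint format, frame rate, and threshold are assumptions.

import numpy as np

FRAME_RATE_HZ = 30          # assumed camera frame rate
STEP_SPEED_THRESHOLD = 0.3  # meters per second; illustrative cut-off

def classify_motion(ankle_positions: np.ndarray) -> str:
    """ankle_positions: (num_frames, 2, 3) array of left/right ankle
    positions in meters over roughly the last half second."""
    # Displacement of each ankle between the first and last frame of the window.
    displacement = ankle_positions[-1] - ankle_positions[0]          # shape (2, 3)
    window_seconds = (len(ankle_positions) - 1) / FRAME_RATE_HZ
    speeds = np.linalg.norm(displacement, axis=1) / window_seconds   # shape (2,)

    # If either foot is moving faster than the threshold, a step is beginning.
    return "stepping" if speeds.max() > STEP_SPEED_THRESHOLD else "stationary"

# Example: a pedestrian whose right ankle moves 20 cm forward over 15 frames.
window = np.zeros((15, 2, 3))
window[:, 1, 0] = np.linspace(0.0, 0.20, 15)
print(classify_motion(window))  # -> "stepping"
```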

The computer will draw on the database to create an accurate prediction of the path of a single pedestrian or a group of pedestrians. The system is being programmed to work from a distance of just under 50 meters, which gives the vehicle plenty of time to decide how to proceed and to take evasive action if necessary.
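The arithmetic behind that 50-meter figure can be sketched in a few lines of back-of-the-envelope Python. The speeds, deceleration, and constant-velocity pedestrian model below are illustrative assumptions, not figures from the study.

```python
# Back-of-the-envelope reaction-time budget at ~50 m. The speeds, deceleration,
# and constant-velocity pedestrian model are illustrative assumptions, not
# figures from the Michigan study.

def time_to_stop(vehicle_speed_mps: float, deceleration_mps2: float = 6.0) -> float:
    """Seconds needed to brake to a halt at a firm but plausible deceleration."""
    return vehicle_speed_mps / deceleration_mps2

def pedestrian_time_to_lane(distance_to_lane_m: float, walking_speed_mps: float) -> float:
    """Seconds until the pedestrian reaches the vehicle's lane, assuming they
    keep moving in a straight line at their current speed."""
    return distance_to_lane_m / walking_speed_mps

vehicle_speed = 50 / 3.6     # 50 km/h expressed in meters per second (~13.9 m/s)
gap_to_pedestrian = 50.0     # meters, the range quoted in the article

print(f"Car reaches the pedestrian in {gap_to_pedestrian / vehicle_speed:.1f} s")   # ~3.6 s
print(f"Car can brake to a stop in {time_to_stop(vehicle_speed):.1f} s")            # ~2.3 s
print(f"Pedestrian reaches the lane in {pedestrian_time_to_lane(2.0, 1.4):.1f} s")  # ~1.4 s

# With ~3.6 s of travel time and ~2.3 s needed to stop, a prediction made at
# 50 m leaves the planner roughly a second to decide before it must be braking.
```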

Why is This Research Such a Big Deal?

The idea of teaching computers to predict movement in this manner is not all that new. What makes the university’s program so revolutionary, however, is that it uses 3D imaging that accounts for how people move through space.

In the past, photos were used instead of videos. This provided an accurate catalog but made no allowance for how people might move through space. This program considers not only body position, as photos would, but also the person’s gait and the way they move.

The research group is also teaching the computers how foot placement might affect the likelihood of stumbling. So hopefully, a computer will be able to predict whether or not someone is likely to fall and make adjustments accordingly.

The Main Difference

It’s sort of like the difference between learning something by heart and learning through understanding. Show a computer thousands of photos of a traffic light, for example, and it’ll be able to recognize one in the real world. However, it won’t really understand that a yellow light will soon turn red.

On the other hand, if it is shown a video clip of the light moving through the different stages, from green to yellow and then red, it can better understand the process.
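A toy illustration of that difference, purely hypothetical and not part of the research project: a model trained only on isolated photos learns labels, while one trained on ordered clips can also learn which state follows which.

```python
# Toy illustration of static vs. sequential learning; hypothetical, and not the
# research project's method. From ordered observations, a sequence learner can
# recover the transition green -> yellow -> red that single photos never reveal.

from collections import Counter, defaultdict

# An "ordered clip" of traffic-light states, one label per observed frame.
clip = ["green", "yellow", "red", "green", "yellow", "red"]

# Count which state follows which across the clip.
transitions = defaultdict(Counter)
for current_state, next_state in zip(clip, clip[1:]):
    transitions[current_state][next_state] += 1

def predict_next(state: str) -> str:
    """Most frequently observed successor of the given state."""
    return transitions[state].most_common(1)[0][0]

print(predict_next("yellow"))  # -> "red": learned from order, not from any single frame
```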

As computers can do all these calculations within seconds, the process should run seamlessly. Accidents like the one that killed Herzberg could become avoidable.

Is This the Right Direction to Take?

Most certainly. Think of all the cues you pick up daily from someone’s posture. If they have a determined gait, you can see that they are walking with purpose. You know that they’re likely more focused. If, on the other hand, they’re hunched over and checking messages, you’ll know that they’re distracted.

As a driver, you can make a split-second decision about how likely it is that a pedestrian will step into the road unexpectedly. It makes sense to teach machines how to make the same calculations.

You can also tell how fast someone is moving, how steady they are on their feet, and pick up on a number of different cues that could spell trouble. It’s a basic survival instinct—we need to be able to read and understand these kinds of cues to stay safe.

Final Notes

Overall, every new technology has problems, but when it comes to self-driving cars, fatalities can be prevented. A new driver has to learn to interpret the actions of pedestrians, and self-driving cars need to do the same. The University of Michigan’s research will speed up that process significantly.
