What’s next for driverless cars?

Engineers at the Southwest Research Institute in San Antonio, Texas, have been developing autonomous vehicle technology for more than a decade. We take you inside the lab to see how the technology behind driverless cars has evolved.

TRANSCRIPT

Engineers at Southwest Research Institute in San Antonio, Texas, have been developing autonomous vehicle technology for more than a decade.

We take you inside the lab to see how the technology behind driverless cars has evolved.

This is Marty, our autonomous vehicle research platform that we use for developing new autonomous vehicle technologies here at Southwest Research Institute.

This vehicle is fully actuated and capable of turning the steering wheel, changing gears, and pressing the brakes all by itself.

Up here we have a GPS antenna that the vehicle is able to use to get a rough idea of where it is in the world at any time.

GPS sensors are very common on vehicles now, but they're not necessarily accurate enough for autonomous driving.

They can jump around, and we can lose the signal as we get cloud cover or drive underneath trees or bridges.
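To make that concrete, here is a minimal Python sketch of how an autonomous stack might gate raw GPS fixes by reported accuracy and freshness before trusting them. The field names and thresholds are illustrative assumptions, not details of SwRI's system.

```python
from dataclasses import dataclass
import time

@dataclass
class GpsFix:
    lat: float                     # degrees
    lon: float                     # degrees
    horizontal_accuracy_m: float   # receiver-reported position uncertainty
    timestamp: float               # seconds since epoch

def is_usable(fix: GpsFix, max_error_m: float = 2.0, max_age_s: float = 0.5) -> bool:
    """Reject fixes that are too imprecise or too stale (e.g., signal lost under a bridge)."""
    fresh = (time.time() - fix.timestamp) < max_age_s
    precise = fix.horizontal_accuracy_m <= max_error_m
    return fresh and precise

# A fix reported with 8 m of uncertainty is rejected; the vehicle falls back on other sensors.
fix = GpsFix(lat=29.4241, lon=-98.4936, horizontal_accuracy_m=8.0, timestamp=time.time())
print(is_usable(fix))  # False
```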

We also have this LIDAR sensor.

This is constantly scanning the environment around us, giving us an idea of where buildings and other structures near the vehicle are.

The vehicle can use this to try to improve its estimate of where it is by comparing a detected building to a known building.

This requires a lot of computational power.
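As an illustration of that matching step, the sketch below aligns a lidar scan against a stored map of points with a translation-only, nearest-neighbour ICP loop. It is a toy stand-in for the idea described above, not SwRI's actual algorithm.

```python
import numpy as np

def refine_position(scan_xy: np.ndarray, map_xy: np.ndarray, iters: int = 20) -> np.ndarray:
    """Translation-only ICP sketch: shift the scan so it best overlaps the known map.

    scan_xy: Nx2 lidar points placed in the world using the rough GPS estimate.
    map_xy:  Mx2 points from a prior map (e.g., building walls).
    Returns the estimated (dx, dy) correction to the rough position.
    """
    offset = np.zeros(2)
    for _ in range(iters):
        shifted = scan_xy + offset
        # For each scan point, find the closest map point (brute force, for clarity).
        d2 = ((shifted[:, None, :] - map_xy[None, :, :]) ** 2).sum(axis=2)
        nearest = map_xy[d2.argmin(axis=1)]
        # Move the scan by the average residual toward its matched map points.
        offset += (nearest - shifted).mean(axis=0)
    return offset

# Toy example: an L-shaped building corner in the map, seen by the scan with ~1 m of GPS error.
xs = np.linspace(0, 10, 50)
map_xy = np.concatenate([np.stack([xs, np.zeros(50)], axis=1),   # wall along the x-axis
                         np.stack([np.zeros(50), xs], axis=1)])  # wall along the y-axis
scan = map_xy + np.array([1.0, -0.5])
print(refine_position(scan, map_xy))  # ~[-1.0, 0.5], the correction to the rough position
```

Even this toy version hints at why the real thing is computationally heavy: every scan point is compared against every map point, every frame.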

We're able to perform lots of different autonomous functions, including off-road driving, by using machine vision and very complicated algorithms that can determine the best path for the vehicle to take when there's no road to drive on.
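When there is no road, one common way to pick a route (shown here as an assumption for illustration, not as SwRI's planner) is A* search over a grid of terrain costs produced by the vision system:

```python
import heapq

def plan_path(cost, start, goal):
    """A* over a 2D terrain-cost grid; higher cell values mean harder terrain."""
    rows, cols = len(cost), len(cost[0])
    frontier = [(0, start)]
    came_from, g = {start: None}, {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:                      # reconstruct the path back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                new_g = g[cur] + cost[nr][nc]
                if (nr, nc) not in g or new_g < g[(nr, nc)]:
                    g[(nr, nc)] = new_g
                    came_from[(nr, nc)] = cur
                    # Manhattan distance is admissible here because every cell costs at least 1.
                    heapq.heappush(frontier, (new_g + abs(goal[0] - nr) + abs(goal[1] - nc), (nr, nc)))
    return None

# Toy terrain map: 1 = easy ground, 9 = rough terrain the planner should route around.
terrain = [[1, 1, 1, 9, 1],
           [9, 9, 1, 9, 1],
           [1, 1, 1, 1, 1],
           [1, 9, 9, 9, 1],
           [1, 1, 1, 1, 1]]
print(plan_path(terrain, (0, 0), (4, 4)))
```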

What we're showing off here today is our Ranger system, and Ranger is something you can't actually see on top of the vehicle.

It's a very nice alternative way for the vehicle to figure out where it is.

Ranger is a camera underneath the vehicle.

It's mounted facing downward, so it's constantly taking pictures of the road surface as we drive around.

While the road surface looks like a random mix of rocks and cracks to us, the vehicle's computer is able to build up a map of what that road surface looks like. Each time it gets a new camera image, it can figure out exactly where it is on that road surface by comparing the locations of the unique cracks and rocks that are randomly distributed throughout the road surface.

This gives us a very simple and robust way to get an extremely accurate position for the vehicle, to within just a couple of centimeters, that's not dependent on the GPS signal we can lose.
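One way to picture that matching step is phase correlation: treat the stored map of the road surface and the new downward image as 2D signals and find the pixel offset where they line up best. The sketch below is a self-contained illustration of that idea with a synthetic texture; it is not the actual Ranger implementation, and the camera scale at the end is an assumed figure.

```python
import numpy as np

def estimate_shift(map_patch: np.ndarray, live_image: np.ndarray) -> np.ndarray:
    """Phase correlation: pixel offset of the live downward image relative to the stored map patch."""
    cross = np.fft.fft2(live_image) * np.conj(np.fft.fft2(map_patch))
    cross /= np.abs(cross) + 1e-9                  # keep only the phase information
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # Offsets past half the image size wrap around to negative shifts.
    return np.array([p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)])

# Random texture stands in for the unique pattern of cracks and rocks in the road surface.
rng = np.random.default_rng(0)
road_map = rng.random((256, 256))
# The live camera frame sees the same patch shifted by (7, -12) pixels as the car moves.
live = np.roll(road_map, shift=(7, -12), axis=(0, 1))
pixel_shift = estimate_shift(road_map, live)
print(pixel_shift)                                 # [7, -12]
# With a calibrated camera (say 0.5 mm per pixel, an assumed number), the pixel offset
# becomes a metric position correction that is independent of GPS.
print(pixel_shift * 0.0005, "metres")
```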

This is really the core building block that we need for developing an autonomous vehicle and providing very reliable, very safe, and very comfortable behavior for the passengers.

Now we're going to put it in robotic mode.

The vehicle is going to switch gears and take full control.

So we are now in autonomous mode.

The computer switched gears into drive and is now fully in control of the vehicle.

So the brake pedal, the gas, and the steering wheel are all completely controlled by the vehicle software.

And you can see, as we're coming up to turns, the vehicle anticipates them and slows us down to give us a nice comfortable ride.
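A common way to get that behaviour (an illustrative assumption here, not necessarily how SwRI implements it) is to cap speed so the lateral acceleration in the upcoming curve stays at a level passengers find comfortable, v = sqrt(a_lat / curvature):

```python
import math

def comfortable_speed(curvature_1pm: float,
                      a_lat_comfort: float = 1.5,   # m/s^2, an assumed comfort limit
                      v_limit: float = 15.0) -> float:
    """Cap speed so lateral acceleration v^2 * curvature stays within the comfort limit."""
    if curvature_1pm <= 1e-6:                       # essentially straight: base limit applies
        return v_limit
    return min(v_limit, math.sqrt(a_lat_comfort / curvature_1pm))

# The planner looks ahead along the route and slows the car before the curvature rises.
for curvature in (0.0, 1 / 100, 1 / 20):            # straight, gentle bend, 20 m radius turn
    print(f"curvature {curvature:.3f} 1/m -> target speed {comfortable_speed(curvature):.1f} m/s")
```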

Now we're coming up on this obstacle course.

We're gonna be flying through these cones.

And these cones are just here to show how accurate this Ranger localization system is.

We can do the same thing with GPS, but we have to record that route and then drive it within maybe the next hour or so.

After that, the GPS has drifted enough that it becomes unusable.

In fact, Ranger's positioning is more accurate than GPS.

And it also works in a GPS-denied environment, so we can combine these sensing technologies with GPS and our other algorithms to give us some of the best autonomous vehicle systems in the world.
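A simple way to see how such a combination works is inverse-variance weighting, the scalar core of a Kalman-style update: each position source is weighted by how certain it is. The numbers below are made up for illustration.

```python
import numpy as np

def fuse(est_a: np.ndarray, var_a: float, est_b: np.ndarray, var_b: float):
    """Inverse-variance fusion of two independent position estimates.
    The less certain source (larger variance) gets the smaller weight."""
    w_a = var_b / (var_a + var_b)
    fused = w_a * est_a + (1 - w_a) * est_b
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused, fused_var

# GPS puts the car at (105.0, 42.0) with ~2 m of uncertainty; Ranger says (103.2, 40.9) to ~2 cm.
gps_xy, gps_var = np.array([105.0, 42.0]), 2.0 ** 2
ranger_xy, ranger_var = np.array([103.2, 40.9]), 0.02 ** 2
pos, var = fuse(gps_xy, gps_var, ranger_xy, ranger_var)
print(pos, var)   # essentially the Ranger estimate; as GPS degrades, its weight shrinks further
```

When GPS is unavailable entirely, the same fusion simply runs on the remaining sources, which is what keeps the system working in GPS-denied environments.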