Underwater Autonomy

A group at Pennsylvania State University in State College, Pennsylvania, is trying to build an autonomous system that can operate underwater.

This unforgiving environment is pushing research into new territory.

Here's the story.

[ Indistinct conversations ] [ Birds calling ]

Come on up here.

Yeah, yeah.

So that's a no-go?

So I have...


[ Beeping ]

[ Humming ] [ Engine roaring ]

By now, you've probably guessed that this is no ordinary boat trip, and this is no toy.

It's called an Iver, an autonomous underwater vehicle, but members of the Applied Research Lab at Penn State aren't really working on the vehicle.

They're more concerned with its mind.

So we're going to have it drive on the surface initially at a 50-degree heading, and when it picks up speed, then we're going to start diving.

It's going to hold a 7-degree downward pitch until it reaches its target depth, and then it's going to level out and try to maintain that depth.
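The maneuver described above can be sketched as a simple three-phase simulation: a surface run at the 50-degree heading, a descent along the fixed 7-degree pitch, and a level-out at the target depth. This is a minimal illustrative sketch, not the team's actual control code; the transcript gives only the heading and pitch, so the target depth, dive speed, and acceleration below are assumptions.

```python
import math

# Assumed parameters -- only the heading and pitch come from the transcript.
HEADING_DEG = 50.0      # surface heading, per the transcript
DIVE_PITCH_DEG = -7.0   # 7-degree downward pitch, per the transcript
TARGET_DEPTH_M = 5.0    # assumed target depth
DIVE_SPEED_MS = 1.5     # assumed speed at which diving begins
DT = 0.1                # simulation time step, seconds

def run_dive_profile(accel=0.2, sim_time=120.0):
    """Simulate the surface-run / dive / level-out sequence."""
    speed, depth, t = 0.0, 0.0, 0.0
    phase = "surface"
    log = []
    while t < sim_time:
        if phase == "surface":
            # pick up speed on the surface before diving
            speed = min(speed + accel * DT, DIVE_SPEED_MS)
            if speed >= DIVE_SPEED_MS:
                phase = "diving"
        elif phase == "diving":
            # descend along the fixed 7-degree pitch
            depth += speed * math.sin(math.radians(-DIVE_PITCH_DEG)) * DT
            if depth >= TARGET_DEPTH_M:
                depth = TARGET_DEPTH_M
                phase = "hold"
        # phase == "hold": maintain target depth (perfect hold assumed)
        log.append((round(t, 1), phase, round(depth, 2)))
        t += DT
    return log

log = run_dive_profile()
```

With these assumed numbers the vehicle levels out at the target depth well before the simulation ends; the final logged entries sit in the "hold" phase at 5.0 meters.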

The team, led by John Sustersic, wants to build a self-driving submarine, but this project is a little more complicated than that.

We're trying to develop an autonomous system, an unmanned system, that can interact with people as well as a manned system would, and so the number of, you know, technical and theoretical challenges that have to be overcome there is pretty big.

The system is called MANTA, Multiagent Architecture for Natural and Trusted Autonomy.

Let's start with the multiagent part.

It's built to mimic the command structure of an actual submarine.

There's a commanding officer, a navigator, officer of the deck, a pilot and an engineer, and these autonomous agents or computer programs use the same language that you would hear on a sub, so when the officer of the deck tells the pilot to dive, it uses the word 'dive.'

By allowing the agents to interact via natural language, an outsider observing the operation of the system will understand the high-level decision-making that's in play, and, really, we can have humans intermixed with the autonomous agents.

So you could have a human commanding officer issuing orders to the rest of the autonomous team.
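The command chain described above can be sketched as agents that pass plain-language orders down a fixed hierarchy, with a human able to stand in for any agent. The class, role names, and message format here are illustrative assumptions, not the project's actual protocol.

```python
# Minimal sketch of the MANTA command-chain idea: each agent logs the
# order it receives and relays it, in plain language, to its subordinate.

class Agent:
    def __init__(self, name, subordinate=None):
        self.name = name
        self.subordinate = subordinate
        self.received = []   # orders this agent has heard

    def issue(self, order):
        """Log the order, then pass it down the chain unchanged."""
        self.received.append(order)
        if self.subordinate is not None:
            return self.subordinate.issue(order)
        return f"{self.name}: executing '{order}'"

# The chain mirrors roles named in the transcript (navigator and
# engineer omitted for brevity).
pilot = Agent("pilot")
officer_of_the_deck = Agent("officer of the deck", subordinate=pilot)
commanding_officer = Agent("commanding officer", subordinate=officer_of_the_deck)

result = commanding_officer.issue("dive to target depth")
```

Because every link in the chain speaks the same plain language, a human commanding officer could replace the top agent without changing anything below.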

That's the ultimate goal -- a trusted autonomous system that can operate without any humans or just a few.

To build it, Sustersic assembled a diverse team, one that includes engineers, programmers and a mathematician.

Most of what I'm looking at has to do with the learning aspect.

We have right now an entire architecture devoted to logic, right, but if you really think about what makes an organism robust in a situation, one of the most important components is that it's adaptable, and that adaptability requires learning.

What I'm thinking about is how do we take in sort of the raw data of the world, and how do we learn from it so that we have a platform that can adjust to novel circumstances?

It's a problem Sustersic compares to teaching a teenager how to drive.

Safety comes with experience.

Our belief is that by enabling learning in that constrained way, the system will become better at doing what it does, and so we'll be able to relax the constraints and give it more decision authority over time, and in that way we keep everyone happy.
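That "earn more decision authority over time" idea can be sketched as a simple gate: the system is promoted to a broader authority level only after demonstrating enough successful missions at its current level. The level names, success threshold, and minimum sample size below are all assumptions for illustration.

```python
# Sketch of constraint relaxation: authority expands only after
# demonstrated performance. All thresholds are illustrative assumptions.

AUTHORITY_LEVELS = ["supervised", "semi-autonomous", "fully autonomous"]
PROMOTION_THRESHOLD = 0.95   # assumed required success rate
MIN_MISSIONS = 20            # assumed minimum missions per level

def next_authority(level_index, successes, missions):
    """Return the (possibly promoted) authority level index."""
    if missions >= MIN_MISSIONS and successes / missions >= PROMOTION_THRESHOLD:
        return min(level_index + 1, len(AUTHORITY_LEVELS) - 1)
    return level_index

level = next_authority(0, successes=19, missions=20)  # meets the 0.95 bar
```

The design choice mirrors the teenage-driver analogy: performance is judged over many trials at the current level of freedom before any constraint is relaxed.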

At the lake, the team sets clear constraints and objectives before any mission begins.

We're going to come back up, get a GPS fix, come back around parallel, reverse course and take another swath of imagery.
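The back-and-forth survey described above, parallel passes with a reversed course on each one, is a standard "lawnmower" pattern and can be sketched as waypoint generation. The swath length, spacing, and coordinate frame here are assumed, not taken from the mission.

```python
# Sketch of a lawnmower survey: parallel imaging swaths, reversing
# course on every other pass. Dimensions are illustrative assumptions.

def lawnmower_waypoints(length_m, swath_spacing_m, n_swaths, origin=(0.0, 0.0)):
    """Return (x, y) waypoints for parallel, alternating-direction swaths."""
    x0, y0 = origin
    waypoints = []
    for i in range(n_swaths):
        y = y0 + i * swath_spacing_m
        start, end = (x0, y), (x0 + length_m, y)
        if i % 2 == 1:          # reverse course on every other pass
            start, end = end, start
        waypoints.extend([start, end])
    return waypoints

wps = lawnmower_waypoints(length_m=100.0, swath_spacing_m=10.0, n_swaths=3)
# Each pair of points is one imaging swath; between swaths the vehicle
# would surface for a GPS fix before turning onto the next line.
```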

The results can be checked and tweaked on-site.

They can also be duplicated in the lab using a popular video-game engine.

So this is our MANTA simulator.

It is actually a two-part simulator.

All we do is put it in the environment, and we have the Soar agents command what our vehicle is doing, and we can see in real time what it looks like. We also have a visualizer, which takes in data from physical tests, like the ones out at the lake, and we can put that data in here and actually replay the mission that took place out there on a screen.

This continual testing is vital to developing agents capable of completing complex tasks and building the trust needed for human interaction.

So many of the challenges that the autonomous system has to face -- 'Is it safe to go here?

How much risk do I assume if I take this particular path versus that particular path?'

-- Those are the things that people are doing all the time and things that this system needs to demonstrate to people's satisfaction that it will make reasonable decisions.

When you think about computers, they are part of a long line of technology that's designed to do all the things that humans are bad at.

They're designed to record information perfectly.

They're designed to compute very quickly.

But the current age of technology is interested in doing the things that humans are good at.

It's sort of working with fuzzy information.

It's transferring knowledge from one domain to another.

And when it comes to that transfer, the team at ARL is just beginning to scratch the surface.