In this episode of SciTech Now, we learn about an innovative program that enables caregivers to experience dementia; explore innovation in humanitarian relief; see how artificial intelligence is helping the human race; and meet a group trying to build an autonomous system that can operate underwater.
SciTech Now Episode 426
Coming up, understanding Alzheimer's disease.
In the tour, you can really experience somewhat what a person is going through.
Innovation in humanitarian relief.
It's knowing what to give.
It's knowing what are the right organizations to work through.
There are so many ways that technology can help.
A look at how artificial intelligence is helping the human race.
AI always seems like something that is in the future, but in reality, it's in the right here and the right now.
We're going to start diving.
It's going to hold a 7-degree downward pitch until it reaches its target depth.
It's all ahead.
I'm Hari Sreenivasan.
Welcome to 'SciTech Now,' our weekly program bringing you the latest breakthroughs in science, technology and innovation.
Let's get started.
More than 45 million people worldwide live with some form of Alzheimer's disease or dementia.
An innovative program at the University of Texas Health School of Nursing in San Antonio, Texas, enables caregivers to experience the symptoms of dementia themselves.
Here's a look.
UT Health has established the Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases. The vision of that institute is to provide comprehensive care for patients with some form of dementia, most commonly Alzheimer's disease, and for their family caregivers. We know that family caregivers are the most important resource for people with Alzheimer's disease and are really their lifeline to the world outside of themselves, so as part of providing this comprehensive program, the School of Nursing has established the Caring for the Caregivers Center.
An innovative part of the Caring for the Caregivers workshop is called the Virtual Dementia Tour, which simulates how it feels to be stricken with the condition.
The Virtual Dementia Tour is an important part of this workshop.
It seems quite simple.
The whole concept of it seems very simple.
We put headphones on someone, which simulate what it's like not to feel in touch with the world outside of you, not to understand what people are saying to you.
We put goggles on, which simulate what happens as the dementia progresses and people lose their peripheral vision.
We have things on their hands so they don't have fine motor control, and we have things on their feet to simulate the peripheral neuropathies that many older people develop, so it really tricks the brain into living in the world as someone with dementia.
The Virtual Dementia Tour is a way in which we can simulate the situation that a person really experiences, and so, for example, a caregiver can go through, perhaps, the first time that they understand what Daddy is going through.
Never before have they really felt it.
They've watched it, seen it, but in the tour, you can really experience somewhat what a person is going through.
After the Virtual Dementia Tour, caregivers share their experience through a debriefing that can be quite emotional.
Now that my dad is at the end, has this condition, I think, towards the end already...
...so I'm, you know, experiencing what he's experiencing...
...and it's sad.
You had no idea that it was... Uh-huh.
I do because I teach a lot of this stuff, and I, you know, work with it but not at this level...
That's pretty profound that academically you know this...
...and really, in terms of researching it, you know this, but how he feels...
...have no idea.
Have no idea.
Actually going through it, it really made me understand how someone, a family member, a patient, a client with dementia, what they have to go through, so it's just... You know, I can understand them now.
I understand, like, why it's so scary, you know, why they do get upset, why they get combative because they just don't know.
You know, they're kind of lost in their body, you know, so it's just a newfound appreciation, you know, and I think that's why it's just so humbling.
Now it's my turn to experience the Virtual Dementia Tour.
First, I'm given five simple tasks to perform.
With my hearing impaired by the white noise in the headphones, the instructions are intentionally difficult to understand to further simulate the experience of a person with dementia.
I'm going to be giving you a set of instructions.
I will only be able to read it once, so try and follow as best as you can.
So belting lope.
Put coffee fridge.
Cloak past five eight.
Med two times three.
Going in the room, and again the vision is impaired.
With the shoe inserts, it's not easy to walk, and just with the hearing loss and the white noise in my ear, it's hard to focus.
Because you're a bit confused and because your mobility is affected... and now I hear a siren.
[ Siren wailing ] And when you hear the siren, that kind of resets your train of thought as well.
The siren is confusing.
We associate sirens with danger.
But the more you go through this and the more disoriented you are, the more you completely forget what he told you.
There's just a state of confusion here, so I'm just kind of walking around to see if this might trigger or help me to remember what the sets of instructions were.
I'm stunned walking out of that experience.
I learned a lot, and I would just hope that, you know, more people go through this training, more people experience this.
I think the level of compassion and care and help...
...for those who need it...
...especially the elderly, would increase...
...dramatically if more people could experience this.
Thank you for saying that.
It makes this really valuable to us.
Yeah, it is.
It was very valuable to me.
What we found is that caregivers who are better prepared to undertake the role are less likely to suffer negative consequences of caregiving and are also able to keep their loved one with dementia living at home longer, avoiding institutionalization.
It is all about the patient, so what I would say in closing to anybody is listen.
It has the same letters as silent, so that means when you listen, you should be quiet, and so listening and hearing what they really are saying.
'I don't want that.
I don't like that.
Please do that.' Just stop and listen to what they have to say to you.
That's what I would give everyone -- listen.
In the wake of natural disasters and acts of terror, several tech companies are making an effort to improve upon traditional response efforts.
Here to talk about innovation in humanitarian relief is Brian Hecht, our resident serial entrepreneur.
He's an adviser to many startups and digital teams, including our own.
One of the saddest parts is when you see people who are interested in helping after a disaster, but they just don't have the right vehicle to get their resources to the people who need them.
It goes far beyond just having the right vehicle.
It's knowing what to give.
It's knowing what are the right organizations to work through.
There are so many ways that technology can help, and it's been lagging a little bit behind, frankly, so this is the next generation of companies that I think are aiming to solve little slices of that.
One of them you want to talk about is NeedsList.
What does it do?
NeedsList, think of it as a wedding registry for humanitarian giving.
So I think we've all seen pictures on the news of when there's an earthquake or a refugee crisis, there's all these crates sitting there that are unused, and it looks like a tragedy, and the problem is very often that the things that are being given are things that the victims don't need.
Likewise, if you are a donor, a lot of people don't like to just write a check to even a reputable organization.
You don't know where it's going.
They prefer to actually give something.
But you want to give the right things.
Maybe the refugees don't need candied yams, and so it's incredibly important to have an efficient marketplace that matches the wishes of the donors with the needs of the recipients.
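The matching idea Hecht describes can be sketched in a few lines. This is a hypothetical illustration of a needs-registry matcher, not NeedsList's actual code; the item names and quantities are made up.

```python
# Hypothetical sketch of a needs-registry matcher in the spirit of a
# "wedding registry for humanitarian giving": donors' offers are only
# accepted when they match something an organization has asked for.

from collections import Counter

def match_offers(needs, offers):
    """Return (accepted, declined) item counts.

    needs:  {item: quantity still required}
    offers: list of (item, quantity) tuples from donors
    """
    remaining = Counter(needs)
    accepted, declined = Counter(), Counter()
    for item, qty in offers:
        take = min(qty, remaining[item])   # never accept more than is needed
        if take:
            accepted[item] += take
            remaining[item] -= take
        if qty - take:
            declined[item] += qty - take   # surplus or unrequested items
    return dict(accepted), dict(declined)

needs = {"blankets": 100, "diapers": 50}
offers = [("blankets", 60), ("candied yams", 20), ("diapers", 80)]
accepted, declined = match_offers(needs, offers)
# accepted -> {'blankets': 60, 'diapers': 50}
# declined -> {'candied yams': 20, 'diapers': 30}
```

The point of the sketch is the declined pile: the candied yams never ship, and the surplus diapers are flagged before they become crates sitting unused at the disaster site.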
Another one called Guardian Circle?
Guardian Circle, what they say is that they are crowdsourcing security.
So this has international implications for humanitarian relief, but it started with a very personal story.
The founder's girlfriend had what appeared to be a stroke in the garage, and she couldn't text her friends. So he developed, effectively, a panic button, which had been around for a long time, but this one is different in that it opens a group chat window with location-based ID for a group of designated people.
So it might be that the neighbor is the best person to help because they're nearby, but the mother doesn't know who the neighbor is, so they can all jump in a group chat.
Now, what does that have to do with global relief?
It's now being deployed in India, where there's a huge problem with women being preyed upon for all sorts of terrible reasons, and this is going to crowdsource the response to a woman in trouble with strangers who are qualified locally but who she may not yet know.
So it might be the guy with an ambulance or, you know, a family of people with a house nearby that can take her in, so it's very, very interesting the way that it's being used, you know, the connection between a personal problem and an international issue.
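The location-based response Hecht describes boils down to ranking nearby designated responders by distance. Here's a small sketch of that idea; it's an assumed simplification, not Guardian Circle's real implementation, and the names and coordinates are invented.

```python
# Illustrative sketch (assumed, not Guardian Circle's actual code) of the
# core idea: an alert ranks nearby qualified responders by distance so
# the closest can jump into the group chat and help first.

import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def rank_responders(alert_pos, responders, radius_km=5.0):
    """Return names of responders within radius_km of the alert, nearest first."""
    nearby = []
    for name, pos in responders.items():
        d = haversine_km(alert_pos, pos)
        if d <= radius_km:
            nearby.append((d, name))
    return [name for _, name in sorted(nearby)]

# Hypothetical positions: the neighbor and an ambulance are close by;
# the mother is in another town and can't respond in time.
responders = {
    "neighbor":  (29.4252, -98.4946),
    "ambulance": (29.4300, -98.5000),
    "mother":    (29.9000, -98.9000),
}
print(rank_responders((29.4241, -98.4936), responders))  # -> ['neighbor', 'ambulance']
```

Ranking by great-circle distance is the simplest reasonable choice here; a production system would likely factor in road travel time and responder qualifications as well.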
Another one called One Concern.
One Concern, I have to say, when I spoke to the founder of this company, I was very personally moved.
He was in his native Kashmir, where he saw 85 percent of the region destroyed or displaced by floods, and as an earthquake engineer, he decided to put his skills to use when he was back in California.
So this is a system that's being used by San Francisco and LA right now that actually allows you to more efficiently plan both where to reinforce structures and allocate resources when planning for natural disasters and maybe more importantly how to allocate resources once a disaster occurs.
So if you think about an earthquake, let's say, you say, okay, well, there's, you know, crazy confusion, and 911 dispatches to whoever calls, and 911 goes down.
He has algorithms that will help you realize not just where the most lives can be saved, where to send your emergency response vehicles, but things like resilience.
So there might be an underprivileged area that would have particular trouble getting access to food if it's in a food desert, or there might be someplace that has a high concentration of senior citizens, so it identifies socioeconomic factors, not just architecture and civil engineering, which to me is just very, very moving.
Brian Hecht, thanks so much.
Artificial intelligence is changing our lives at lightning-fast speed.
Now, AI is used in everything from the workplace and banking to language.
James Scott, professor of statistics and data science at the University of Texas, is here to discuss how to better understand the modern world of artificial intelligence.
Thanks for joining us.
Ah, thanks for having me.
So it seems to me... My phone is too far away from me.
I'm a creature of habit.
You know, it seems that there are already examples of artificial intelligence in my phone right now.
In a way, it's reading my e-mails and telling me what's coming up or how long traffic is going to be.
There seems to be little bits of AI creeping into our lives that we're not necessarily aware of because we might have a conception of AI as, you know, some big HAL 9000 robot.
Just the other day, when I parked at the gym, I noticed a new feature on my phone that would tell me where my parked car was.
I mean, and we tend not to call that stuff AI.
We just call it an app, right, and I think that the ubiquity of AI is actually something that a lot of folks don't appreciate.
You're exactly right that when people think of AI, they're kind of calling on these science-fiction examples, you know, maybe the cute robots from 'Star Wars,' you know, BB-8 or R2D2, that everybody feels an emotional response for, or maybe they're the evil robots from other science-fiction works, but, you know, AI always seems like something that is in the future, and I think you're absolutely right, that in reality it's in the right here and the right now rather than in the distant future, and it's to all of our benefit to understand the technology a little bit better.
But what should we be paying attention to in how these technologies are rolled out to people today?
I'm pro-nuance and also pro-consent, right?
So I think that if there's one thing people should be aware of about artificial intelligence, other than the fact we just discussed, that it's here today and happening right now, it's its dependence on data. You know, people have the notion that behind an AI robot there's a genius programmer explaining to it how to respond to all possible scenarios, and that's not what it is at all.
The kind of AI, for example, that many folks have at home, an Amazon Echo, you know, we call it Alexa, and we tend to anthropomorphize it, but it's a chain of algorithms, all of which rely very heavily upon data, and that's your data.
So, you know, Alexa, when you ask it, you know, to give you a recipe for spaghetti bolognese, it's getting better at that with every interaction, not just yours but all of the ones across the country and the world with people asking it similar things, and I think that dependence on data, it's crucial for the success of these modern AI systems, but there's also the flip side of the coin, that we should certainly be aware of how and why our data is being used for these purposes.
And, yeah, if I had to have the message out there, data, you know... Really, AI is just probability on big-data steroids, and, you know, you got to be aware of how your data is being used, no question.
Right now, there doesn't seem to be that type of transparency.
If I have a Google Home product or an Alexa device, we're sold the convenience factor.
We're not necessarily sold that, 'Hey, by the way, your information is being taken in as part of the aggregate, but it is technically still your information,' or whether I own all of my search queries going forward or whether Amazon is entitled to have a copy of it because I'm using their device.
Yeah, you know, I think you point to an issue with, say, those user agreements that everybody just sort of clicks through whenever they...
Exactly, and they don't bother to read the 37 pages of legal permissions and constraints and what they're actually signing over.
Like I said, I'm pro-consent, and I think that there's, you know, actually a good movement within AI and machine learning these days coming more from some of the academic side to make those more transparent.
Well, let's also help people understand that there's a difference between kind of narrow artificial intelligence that can do one task really well versus, again, that kind of all-knowing, all-seeing Terminator or Borg that's going to come after us at night.
Yeah, absolutely, and I think that right now, you know, there is nobody on the planet who has any idea how to build any kind of machine with general intelligence in the manner of a human or a Terminator or something like that, and that narrative, you hear it coming from the Elon Musks of the world and some people who are a little bit more apocalyptic about the future of AI. You know, I'm ultimately optimistic about what AI will bring us, but I do think that there are a lot of concerns that arise from narrow AI.
When I think about those kinds of narrowly tailored applications of artificial intelligence, I do have some concerns, you know, things like jobs.
You know, how can we build a social safety net that's going to be capable of addressing short-term job disruptions?
I think about inequality, you know, how to mitigate the concentration of wealth and power in the hands of large tech firms, and, you know, will the people who own the smartest robots own the future, that sort of thing.
I think about privacy and the things we've talked about, how good AI boils down to finding patterns in data sets about you.
That's your words, your online behavior, your health outcomes and so on.
Well, put this in perspective.
Is the AI evolution/revolution that we're living in now, is it as significant as the industrial revolution was?
Is it like our shift from agrarian societies to urban centers?
You know, that's probably a question for a sociology of technology person.
You know, I tend to think that it is.
Generally, how big of a deal is it?
I think it's a very big deal, and here's why.
I think that if you ask what are our capabilities that the industrial revolution radically enhanced, it was our physical capabilities.
You know, you no longer had to be, you know, a person with a hand drill to try to drill through rock.
Instead, you had industrial machinery that would drastically magnify your physical abilities.
And then I think of the technological revolution of the early to mid-20th century that culminated in computers and rockets, and, you know, all those applications of computing technology, those went hand in glove with our own deductive capabilities, you know, just reasoning from premises to conclusions.
All of a sudden, you know, you could add, you know, a billion numbers together at lightning speed that you never would have been able to do just because of the constraints on your own brain.
Well, if the industrial revolution augmented our physical capabilities, and the computational revolution augmented our deductive capabilities, then the AI revolution is going to augment our inductive capabilities: our ability to see patterns, to learn what kinds of inputs tend to go with what kinds of outputs. Those three together, I think, really do make for three fundamental revolutions in the capability of human beings.
Now, in your book, 'AIQ,' you start talking about that we should be familiar with a language to be able to interact.
Give us an example of a lexicon that we should start to familiarize ourselves with.
So, you know, you've probably heard the term 'machine learning,' right, and that's a brilliant piece of branding by the folks who brought those kinds of methods into the mainstream.
You know, what is machine learning, and how does that relate to artificial intelligence?
I see a tremendous amount of confusion about that.
I mean, it's hard for me to, you know, drive down the street here in Austin without seeing a billboard for a company that's going to bring machine learning and AI to solve all of your business problems and sell more Cheerios and all of this.
So what's the difference, right?
Here's the analogy I like to draw.
Machine learning is like the internal combustion engine.
It's a general-purpose technology that can be dropped into a lawn mower.
It could be dropped into a car.
It could be dropped into a prop plane, whereas artificial intelligence, that's the whole car, right?
That's the whole set of interacting systems of which the internal combustion engine is just one part.
James Scott, author of 'AIQ,' thanks so much for joining us.
Thank you, Hari.
I appreciate it.
A group at Pennsylvania State University in State College, Pennsylvania, is trying to build an autonomous system that can operate underwater.
This unforgiving environment is pushing research into new territory.
Here's the story.
[ Indistinct conversations ] [ Birds calling ]
Come on up here.
So that's a no-go?
So I have...
[ Beeping ]
[ Humming ] [ Engine roaring ]
By now, you've probably guessed that this is no ordinary boat trip, and this is no toy.
It's called an Iver, an autonomous underwater vehicle, but members of the Applied Research Lab at Penn State aren't really working on the vehicle.
They're more concerned with its mind.
So we're going to have it drive on the surface initially at a 50-degree heading, and when it picks up speed, then we're going to start diving.
It's going to hold a 7-degree downward pitch until it reaches its target depth, and then it's going to level out and try to maintain that depth.
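The dive behavior described here, hold a fixed downward pitch until target depth, then level out and maintain it, can be sketched as a simple loop. This is a toy model with assumed dynamics (constant speed, no control error or disturbances), not the Iver's actual controller.

```python
# Toy sketch of the dive profile described above (assumed dynamics, not
# the real vehicle's controller): descend at a 7-degree downward pitch
# until target depth, then level out and hold that depth.

import math

def dive_profile(target_depth_m, speed_mps=1.5, dt=1.0, pitch_deg=7.0, steps=200):
    """Simulate depth over time; returns a list of depths, one per step."""
    # Vertical descent rate implied by forward speed and pitch angle.
    descent_rate = speed_mps * math.sin(math.radians(pitch_deg))
    depth, depths = 0.0, []
    for _ in range(steps):
        if depth < target_depth_m:
            # Pitched dive phase: descend, but never overshoot the target.
            depth = min(target_depth_m, depth + descent_rate * dt)
        # Level phase: depth simply held at target.
        depths.append(depth)
    return depths

depths = dive_profile(10.0)
# The vehicle descends at a constant rate, then holds 10 m for the
# remainder of the run.
```

A real depth controller would close the loop on pressure-sensor feedback rather than dead-reckoning like this, but the two-phase structure (pitched descent, then depth hold) is the same.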
The team, led by John Sustersic, wants to build a self-driving submarine, but this project is a little more complicated than that.
We're trying to develop an autonomous system, an unmanned system, that can interact with people as well as a manned system would, and so the number of, you know, technical and theoretical challenges that have to be overcome there is pretty big.
The system is called MANTA, Multiagent Architecture for Natural and Trusted Autonomy.
Let's start with the multiagent part first.
It's built to mimic the command structure of an actual submarine.
There's a commanding officer, a navigator, officer of the deck, a pilot and an engineer, and these autonomous agents or computer programs use the same language that you would hear on a sub, so when the officer of the deck tells the pilot to dive, it uses the word 'dive.'
By allowing the agent to interact via natural language, then an outsider observing the operation of the system will understand the high-level decision-making that's in play, and, really, we can have humans intermixed with the autonomous agents.
So you could have a human commanding officer issuing orders to the rest of the autonomous team.
That's the ultimate goal -- a trusted autonomous system that can operate without any humans or just a few.
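The chain of command described above can be sketched as agents relaying the same natural-language command words a submarine crew would use, so a human can read, or play, any role in the chain. This is my own minimal simplification, not the MANTA codebase; the class and role names are illustrative.

```python
# Minimal sketch (a simplification, not the MANTA system) of agents
# passing natural-language command words down a chain of command, so an
# observer can follow the high-level decision-making.

class Agent:
    def __init__(self, name, subordinate=None):
        self.name, self.subordinate = name, subordinate
        self.log = []   # every order this agent has heard, in order

    def order(self, command):
        """Record the order and relay it down the chain of command."""
        self.log.append(command)
        if self.subordinate:
            return self.subordinate.order(command)
        return f"{self.name} executing: {command}"

# Wire up part of the crew described in the segment.
pilot = Agent("pilot")
ood = Agent("officer of the deck", subordinate=pilot)
co = Agent("commanding officer", subordinate=ood)

print(co.order("dive"))   # the word 'dive' flows CO -> OOD -> pilot
```

Because every agent hears and relays the same word, any node in the chain could be replaced by a human issuing or receiving the identical command, which is the "natural and trusted" part of the design.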
To build it, Sustersic built a diverse team, one that includes engineers, programmers and a mathematician.
Most of what I'm looking at has to do with the learning aspect.
We have right now an entire architecture devoted to logic, right, but if you really think about what makes an organism robust in a situation, one of the most important components is that it's adaptable, and that adaptability requires learning.
What I'm thinking about is how do we take in sort of the raw data of the world, and how do we learn from it so that we have a platform that can adjust to novel circumstances?
It's a problem Sustersic compares to teaching a teenager how to drive.
Safety comes with experience.
Our belief is that by enabling learning in that constrained way, the system will become better at doing what it does, so we'll be able to relax the constraints and give it more decision authority over time, and in that way we keep everyone happy.
At the lake, the team sets clear constraints and objectives before any mission begins.
We're going to come back up, get a GPS fix, come back around parallel, reverse course and take another swath of imagery.
The results can be checked and tweaked on-site.
They can also be duplicated in the lab using a popular video-game engine.
So this is our MANTA simulator.
It is actually a two-part simulator.
All we do is put it in the environment, and we have the Soar agents command what our vehicle is doing, and we can see real-time what it looks like, and we also have a visualizer, which takes in data from physical tests, like the ones out at the lake, and we can put that data in here and actually replay the mission that took place out there on a screen.
This continual testing is vital to developing agents capable of completing complex tasks and building the trust needed for human interaction.
So many of the challenges that the autonomous system has to face -- 'Is it safe to go here?
How much risk do I assume if I take this particular path versus that particular path?'
-- Those are the things that people are doing all the time and things that this system needs to demonstrate to people's satisfaction that it will make reasonable decisions.
When you think about computers, they are part of a long line of technology that's designed to do all the things that humans are bad at.
They're designed to record information perfectly.
They're designed to compute very quickly.
But the current age of technology is interested in doing the things that humans are good at.
It's sort of working with fuzzy information.
It's transferring knowledge from one domain to another.
And when it comes to that transfer, the team at ARL is just beginning to scratch the surface.
And that wraps it up for this time.
For more on science, technology and innovation, visit our website, check us out on Facebook and Instagram and join the conversation on Twitter.
You can also subscribe to our YouTube channel.
Until next time, I'm Hari Sreenivasan.
Thanks for watching.