SciTech Now Episode 418

In this episode of SciTech Now, we see how one community is getting ready for climate change and urban renewal; a look into how signals in our bodies indicate the onset of diseases; the future of space exploration; and a mobile app changing the parking game by allowing drivers to park with a scannable code.

TRANSCRIPT

[ Theme music plays ] ♪♪

Coming up, climate change and urban renewal...

A bioretention garden is a form of green infrastructure.

...medical code-breaking...

What bioelectronic medicine is trying to do is develop these devices that will listen in on this communication between our brain and our organs and try to diagnose diseases.

...a lunar shelter for astronauts...

The whole rationale for thinking about living in caves on the moon is, in fact, the hazards.

...parking made easier with your phone.

We saw the massive traffic issue, and we just kind of said, 'Well, this is a problem that's solvable through technology.'

It's all ahead.

Funding for this program is made possible by...

Hello. I'm Hari Sreenivasan.

Welcome to 'SciTech Now,' our weekly program bringing you the latest breakthroughs in science, technology, and innovation.

Let's get started.

Flooding and severe storms are one of the biggest worries for municipalities throughout the U.S. as they prepare for the effects of climate change.

In Detroit, getting ready for changing weather patterns has prompted a creative, new way to use the city's vacant lots left from demolished homes.

This segment is part of an ongoing public-media reporting initiative called 'Peril and Promise,' telling the human stories and solutions of climate change.

Across the country, cities are tasked with managing rainwater that falls on impervious surfaces -- a problem that's getting worse due to the effects of climate change.

The Clean Water Act mandates that municipalities keep runoff clean and manage overflows.

In older cities in the Great Lakes Basin, like Detroit, this is a serious challenge.

The strength of the storms is definitely different.

What that means is that there's more precipitation falling in a shorter amount of time.

In August of 2014, a storm dropped 5 1/2 inches of rainfall on Detroit, and the sewers discharged 10 billion gallons of overflows into local rivers and streams, as well as the basements of Detroiters.

My basement, a couple years ago, had like 8 inches of standing water in it after one particular storm.

I saw some pretty -- I'm talking about critical situations, where people had worked hard and their property just got damaged during the floods.

Over the last 20 years, Detroiters invested over $1 billion to mitigate 95% of overflow.

To account for the last 5%, the city turned to green infrastructure.


Joan Nassauer, professor of landscape architecture at the University of Michigan, thought up a unique solution.

A bioretention garden is a form of green infrastructure.

Each of the bioretention gardens can hold up to 300,000 gallons of water.
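A quick back-of-envelope calculation puts that 300,000-gallon figure in context against the August 2014 storm mentioned earlier. This sketch uses only the numbers from the segment plus standard unit conversions; the drainage-area result is illustrative, not a design figure from the project.

```python
# Back-of-envelope: how much drainage area could one 300,000-gallon
# bioretention garden absorb during the August 2014 storm (5.5 in of rain)?
# Capacity and rainfall come from the segment; conversions are standard.

GALLONS_PER_CUBIC_FOOT = 7.48052
SQFT_PER_ACRE = 43_560

capacity_gal = 300_000
rainfall_in = 5.5

capacity_cuft = capacity_gal / GALLONS_PER_CUBIC_FOOT  # ~40,100 cu ft
rainfall_ft = rainfall_in / 12.0                       # ~0.458 ft

# Area whose entire runoff from this storm the garden could hold
area_sqft = capacity_cuft / rainfall_ft
area_acres = area_sqft / SQFT_PER_ACRE

print(f"{area_sqft:,.0f} sq ft (~{area_acres:.1f} acres)")
```

In other words, a single garden could capture all the rain from that record storm falling on roughly two acres of hard surface.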

Detroit had a very large number of vacant properties that I could hardly get my mind around.

It was so clear to me that there was enormous potential for managing stormwater if these vacant properties in Detroit could be used in the right way.

We said, 'What would happen if we filled those basement excavations, as part of demolition, with a highly porous material?

Researchers chose Northwest Detroit to host the bioretention gardens.

The minute I found out what its purpose was, I was all in.

I mean, there is nothing more important than making sure that families are safe, because there's a lot of bacteria that got into everybody's basements.

We were one of those people that had 3 feet of sewage in our basement.

There's health decisions to be made around stormwater management, in terms of the spread of infectious disease.

And so public health kind of gets left off sometimes.

In 2015, we completed our first survey around the sites, about 800 square feet.

They asked us how would we feel about a garden getting put across the street, and we told them it would be a lovely idea, because it was just naked.

They gave the field life.

Since we put the garden in, we actually have children that play out there now, riding their bikes.

We have people walking by the garden now.

People are getting back out into the community.

They're not tied in their house anymore, with their blinds pulled.

They're actually opening their blinds and enjoying life again.

And we want neighborhoods in Detroit to be beautiful places to live.

But there's another reason that's more of a tactic for sustainability as it relates to water quality.

If you pair an ecological benefit, like being able to hold stormwater, with beauty on the surface, then the ecological benefit is more likely to be sustained over time.

In addition to the bioretention gardens, the city of Detroit is investing in a multitude of green-infrastructure projects.

We know that climate change is already happening.

From 1951 to 2015, we've seen a 4.5% increase in total annual precipitation across the state of Michigan.

When you're maybe economically insecure, and your basement floods several times a year, that can be a really stressful situation, and so this green infrastructure might be one strategy to take the burden off of residents who are experiencing that.

[ Theme music plays ] [ Computer keys clacking ] ♪♪

With scientific advancements, researchers now have a better understanding of the signals in our bodies that indicate the onset of diseases.

Joining us to discuss the future implications of this groundbreaking technology is Theodoros Zanos, head of the Neural Decoding and Data Analytics Lab at the Center for Bioelectronic Medicine at the Feinstein Institute for Medical Research in New York.

What is bioelectronic medicine?

Hari, thank you for having me.

Bioelectronic medicine is a new field in medicine that tries to use technology to treat and diagnose all kinds of different diseases and conditions.

We know for a fact that most of our organs are innervated and connect to our brain and communicate.

Mm-hmm.

And they communicate their function and whether something goes wrong.

So, what bioelectronic medicine is trying to do is develop these devices that will listen in on this communication between our brain and our organs, and try to diagnose diseases early, before symptoms arise, and actually treat them by stimulating the nerves or blocking a specific signal.

So, do we -- Have we figured out the language that the organs are using?

So, that's what we're trying to do.

The language of the nervous system is something that we're trying to figure out.

So, one of the things that we're trying to do is listen in on this back-and-forth.

And we do that by placing electrodes, which are microphones, very close to nerves or inside the brain.

And what that enables us to do is listen in on these very tiny electrical signals that the brain and the neurons use to actually communicate the function of specific organs and also tell the organs what they should be doing, how they should function.

So, one of the key components of these devices is to take that language and try to decipher it into something that we understand and care about.

And that's what we're trying to do in my lab and in the center in general.

So, what sort of diseases are we talking about?

What are the things that are more likely to be able to be visible?

So, what we're focusing on right now are diseases like Crohn's disease or rheumatoid arthritis, which are autoimmune diseases.

And previous work at the Feinstein Institute by our C.E.O., Dr. Kevin Tracey, has been instrumental in identifying the role of the neural system in regulating these diseases and, in general, our inflammatory reflexes.

So we're focusing on diseases that are related with our immune system but also our metabolic system, such as diabetes.

And then we also look at the other chronic diseases, like paralysis, as well.

So, what are you able to see now?

Say, for example, if someone is heading into diabetes, what is their body doing?

Is their pancreas telling their brain things all the time, that you're basically starting to pick up on?

Mm-hmm. So, that's really what we're trying to do.

We're trying to pick up on the signals that the pancreas is sending to the brain through its sensors.

So, there are sensors that constantly monitor our state, our homeostasis, as we say.

Mm-hmm.

And that really is the language whose signals we are trying to pick up.

We're trying to eavesdrop on this back and forth.

And then, for instance, in the case of diabetes, detect whether someone is getting into a hyperglycemic or hypoglycemic state, and then turn around and use the same device and the same electrode that listens in on this activity to actually start stimulating the nerves.

And by stimulating the nerves, we've seen before that we can alter, in the case, again, of diabetes, blood glucose levels.

So, in essence, a closed-loop device that will listen in, identify the problem, and then intervene and start stimulating without the patient even having to do anything or even experiencing any symptoms.
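The closed-loop idea Zanos describes -- sense, classify, intervene -- can be sketched in a few lines. The thresholds, units, and stimulation interface below are illustrative placeholders, not the Feinstein Institute's actual design; a real device would use a learned decoder rather than a simple rule.

```python
# Minimal sketch of a closed-loop device: read a signal, classify the
# state, and decide whether to stimulate. Thresholds are standard clinical
# glucose ranges (mg/dL); everything else here is invented for illustration.

def classify_state(glucose_mg_dl: float) -> str:
    """Crude rule-based decoder standing in for a learned one."""
    if glucose_mg_dl > 180:
        return "hyperglycemic"
    if glucose_mg_dl < 70:
        return "hypoglycemic"
    return "normal"

def control_step(glucose_mg_dl: float) -> str:
    """One pass of the loop: decide whether to stimulate the nerve."""
    state = classify_state(glucose_mg_dl)
    if state == "normal":
        return "no stimulation"
    # In a real device this branch would trigger a nerve-stimulation
    # pattern chosen to push glucose back toward the normal range.
    return f"stimulate ({state})"

for reading in [95, 210, 55]:
    print(reading, "->", control_step(reading))
```

The key property is that the patient never acts: the same electrode that records the state also delivers the intervention.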

So, how would this device get into my body to be able to listen to these signals?

So, at this stage, we're mainly working with devices that would require implantation, so a surgery.

The main nerve that we're working on right now, which is called the vagus nerve -- it innervates the majority of our peripheral organs -- is located in the neck, so it's quite easily accessible for neurosurgeons to put a sensor there.

Right now, there are surgeries that are happening that place sensors on the vagus nerve in patients with epilepsy, and that takes around 45 minutes.

So it's not a hugely invasive procedure, but it is invasive.

However, we're looking at noninvasive solutions, as well.

We try to stimulate and record from the nerves without actually having to do the surgery.

Well, could this become the sort of pacemakers of the future, where we would actually have implantable devices that are measuring certain types of organs, certain types of signals, and then change those or modify those, based on the needs?

Mm-hmm, that's exactly the idea.

The idea is to -- The pacemaker, you could say, was the predecessor in all of these.

The idea is to have a device that will listen in on this activity.

And it doesn't necessarily have to be tailored for one specific disease.

Right now, we are working, targeting specific diseases.

But in the far future, what we are looking to do is develop these devices so that they will try to tackle all kinds of different diseases that start from a breakdown or a malfunction in the neural system.

When pacemakers were invented, we didn't have at least the advancements in machine learning and algorithms that we have today.

So, what can you take from all of the information that's out there, all the technologies, and how do you make this new kind of pacemaker smarter?

So, that's exactly what my lab is trying to do.

We are trying to use machine-learning algorithms that are out there right now, and they keep on evolving.

And we're trying to use the same ideas that big companies, like Google or Amazon, use to try to understand what we're saying to their devices, decipher our voices, and translate them into actions that the device will need to make.

Kind of the same idea, we try to apply it to listening in on the nervous system, translating that language into actions, and then having a device that is also adaptable.

So it changes according to the different state of the organism and also other problems that might arise.

So it becomes smarter as it's inside the body.

Learns from the body but also teaches the body how to heal itself, and really converts neural signals into what we call health signals.

How far are we from seeing devices that use these kind of technologies?

So, right now, there are several small clinical studies that are carried out at the Feinstein Institute, as well as in Europe and other places in the world.

However, the devices that would enable such things would need several more years to be deployed in the market.

However, we feel comfortable that the first iterations on specific conditions should work, and they are being tested right now in clinical trials.

And as I said before, the goal is to make a device that's not tailored to one specific disease but a device that will understand that something is going wrong at any time of day, for a lot of different diseases, and try to intervene.

All right.

Theodoros Zanos, thanks so much for joining us.

Thank you for having me.

A city-size lava tube has been discovered on the moon.

This tube could provide shelter for astronauts and could, potentially, allow them to live on the moon.

Professor in the Department of Earth, Atmospheric, and Planetary Sciences at Purdue University, Jay Melosh, joins us via Google Hangouts to discuss the future of space exploration.

All right, this is fascinating to me.

First of all, how do we know that there's a tube inside the moon?

Well, we're not entirely certain.

The idea that there are lava tubes came from an experiment that NASA ran in 2012.

We had two satellites in orbit around the moon, very, very low, and we measured the distance between them with very high precision.

The idea behind that experiment was to measure the attraction of gravity at the moon's surface.

What we discovered is that, underneath the vast lava plains on the moon's surface, there were kind of sinuous or river-shaped deficits of mass, so that the attraction of gravity wasn't as big in these kind of sweeping, curved riverlike valleys.

We know of features like that on the Earth, much smaller.

They're called lava tubes.

And what they are, are feeder channels for the lava flows that ran empty.

And as the lava froze, they remained as tunnels.

We see those on the surface of the Earth, too.

But on Earth, they're not that big.

They're maybe 50 or 100 feet across, at most.

On the moon, they seem to be almost a mile in diameter.

Wow.

There are two reasons for that.

One, of course, is the lower gravity of the moon, so you can support a bigger cave without collapsing.

But the other reason is that the eruption rate of lunar lava is much higher than that of the Earth.

We know that because we see open feeder channels, and they're vastly bigger than anything on Earth.

So, when the lava came up to the surface of the moon -- and that was about 4 billion years ago -- the lavas have had plenty of time to solidify, to cool off.

But when that lava came up, it flooded the near side of the moon, and the channels that fed the big flows then ran empty after the lava stopped welling up from the moon.

And as they emptied out, the surface at the top of these channels was able to freeze solid and to maintain a big tunnel.

So, how do we know the size of these tunnels?

You said that you can kind of extrapolate from the valleys that are on the surface, and we can say that they're probably underground, as well.

And I know you've done some math on the gravity.

But how do we know that... Let's say, it might be dome-shaped or it might be, you know, in a different shape in a different area.

And what kind of holes are kind of poking in and leading to it?

Well, you're exactly right.

We don't know the shape.

We do know the volume, though, because we know how much mass is missing, and that's a huge amount of material.

So it could be dome-shaped.

They could be round.

They could be almost any shape.

And we also don't know the depth below the surface.

For that reason, we really need to do some kind of a ground-penetrating radar.

And the thing that happened after we had announced the finding of these tubes is that a group with the Japanese Kaguya mission, which had a radar like that aboard, reanalyzed the data with a new point of view, realizing that they might be looking for caves, and they, indeed, found them.

One of our candidates and one of their data tracks cross each other.

They looked at the data.

They had to reanalyze it.

They did that, and they found reflections from both the top and the bottom of the tube, which tells us that it's at least, oh, about 150 feet between the top and the bottom.
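The radar measurement Melosh describes reduces to simple timing: inside an empty tube the wave travels at the speed of light, so the gap between the roof echo and the floor echo gives the cavity height as c·Δt/2. The delay below is chosen to reproduce the roughly 150-foot figure from the interview; it is illustrative, not actual Kaguya data, and the real analysis also has to account for propagation through the overlying rock.

```python
# Cavity height from a radar echo pair: height = c * dt / 2, where dt is
# the delay between the reflection off the tube's roof and its floor.

C = 299_792_458.0        # speed of light in vacuum, m/s
FEET_PER_METER = 3.28084

dt_seconds = 0.305e-6    # ~0.3 microseconds between the two reflections

height_m = C * dt_seconds / 2
height_ft = height_m * FEET_PER_METER
print(f"{height_ft:.0f} ft")  # ~150 ft
```

A sub-microsecond echo separation is why a purpose-built, high-resolution radar is needed to pin down the tubes' depth and shape.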

That's a little smaller than what we inferred from the gravity deficit, so they may be picking up one of the smaller tubes.

At the moment, we don't know.

We really need to go back to the moon with a radar designed to look at these things, to find exactly their depth below the surface and their shape.

So, if we have a station on the moon, is it more likely that it would be inside one of these tubes, versus on the surface, given that there's no atmosphere there and the astronauts are probably in danger of the kind of radiation of the sun?

Well, the whole rationale for thinking about living in caves on the moon is, in fact, the hazards.

The big one is radiation.

But, in addition, the temperature on the surface of the moon ranges from about minus-220 degrees to plus-230 or 240 degrees during the lunar day-to-night cycle.

So that kind of temperature variation can cause real problems.

And there are other hazards that lava caves would really allow us to get away from.

It's maybe a little inconvenient to get into these caves, but we know that there are several entrances on the surface of the moon.

They're called skylights.

That name derives from similar features on the lava flows on the Earth.

But we know that there are openings probably created by ancient meteorite impacts on the top of the tube.

But we could imagine lowering astronauts or supplies into there with a bunch of wire networks -- we're very familiar with doing that, from terrestrial mining operations -- and get the astronauts away from the severe surface hazards, particularly the radiation.

All right, Jay Melosh from Purdue University, thanks so much for your time.

It's a pleasure.

[ Computer keys clacking ] [ Theme music plays ] ♪♪

A mobile app is changing the parking game.

The technology allows drivers to find and prepay for parking with ease, using a scannable code.

Here's the story.

Finding a parking space downtown or in a densely populated area can be a challenge.

But a San Antonio-based company has created an app to make parking easier.

We've got roots right here in San Antonio.

We launched out of the Geekdom.

But our first market was Mexico City, where we saw that there were 4 million cars on the road every day and a massive parking problem that's a worldwide issue.

So we decided to build a network-effect app for parking that solves it from the user's perspective.

And we had a little early success there and brought it back to the States, and now we're expanding here.

Mexico City served as a challenging proving ground for the Arriv.io app.

We saw the massive traffic issue, and we just kind of said, 'Well, you know, this is a problem that's solvable through technology.'

And we hadn't really seen anything at scale anywhere else in the world, tackling that problem.

There are other people, obviously, trying to do it, because it's a worldwide problem.

But what we figured in Mexico is, you know, 'There's 4 million cars on the road there.

Let's look at the commuter patterns and start trying to enable parking sites based on where people are moving across town.'

And that let us get a very fast scale-up and test, you know, that thesis.

And then we started looking at, 'In the U.S., does this make sense?

Where do people move from?'

So, like, if you're in San Antonio, you know you've got your different pockets of town, and people are commuting within those pockets, maybe for business meetings and whatnot, and they're also coming from other markets.

So, when you're going to Austin, you want to know that you've got a place in Austin.

And so you've got to build this network that really follows a user's mobility pattern.

And that's what we're after and solving for.

How does the Arriv.io app work?

It's a seamless process.

Whether it's street parking or garage parking or valet parking, you just click 'Check In' in the app, and the magic works in the background.

And whatever needs to happen, it's seamless to the user.

So you don't have to go and, like, try to make a reservation online or something like that.

It's got nothing to do with that.

It's, 'I'm here, I need to park.'

Arriv.io will show you where the nearest parking site is that's enabled with Arriv.io.

And when you get there, just by clicking 'Check In,' the app will do whatever you need to do.

And then, when you go to leave, you click 'Check Out,' and it does whatever needs to happen on the back end to make the magic work so you can leave.

And your payment is seamless inside the app.
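The check-in/check-out flow described above can be sketched as a simple session lifecycle. Arriv.io's real backend, QR payload format, and pricing are not public; the class, hourly rate, and identifiers below are invented for illustration.

```python
# Hypothetical sketch of the parking session described in the segment:
# scanning the QR code opens a session, and checkout computes the charge.

from datetime import datetime, timedelta

RATE_PER_HOUR = 2.50  # made-up garage rate, in dollars

class ParkingSession:
    def __init__(self, qr_payload: str, start: datetime):
        self.site = qr_payload  # e.g. a garage ID encoded in the QR code
        self.start = start

    def check_out(self, end: datetime) -> float:
        """Return the charge; a real backend would also open the gate
        and bill the card on file."""
        hours = (end - self.start).total_seconds() / 3600
        return round(RATE_PER_HOUR * hours, 2)

t0 = datetime(2019, 5, 1, 9, 0)
session = ParkingSession("weston-centre-garage", t0)
print(session.check_out(t0 + timedelta(hours=2)))  # -> 5.0
```

From the driver's side all of this is hidden behind the two scans, which is what the segment means by the "magic" working in the background.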

Reyes believes that Arriv.io will have a positive impact on all traffic, even for drivers who aren't using the app.

In metropolitan areas, at least 20 to 30 minutes per trip, on average, is spent looking for parking.

So if I can take you directly to the parking site, get you off the street as quickly as possible, and get you parked as quickly as possible, with Arriv.io, that's a matter of seconds, as opposed to 20 minutes looking around, driving around the block.

So that could have a serious impact.

At scale, that could have a serious impact in traffic in downtowns, right?

David Sandoval, the senior chief engineer at the Weston Centre in downtown San Antonio, describes the ease of adding Arriv.io to his parking garage.

Our end was actually very painless.

Really, it's us joining Arriv.io's network and putting our parking structure into their system.

At that point, that's really all there is on our part.

We're done.

Arriv.io takes care of everything else.

Our current systems don't have to be integrated.

We don't have any cost expenditure associated with that.

And we're able to bring that technology instantaneously to our building.

Let's test-drive the Arriv.io app and see how it performs.

Okay, so, I'm pulling in to park at the Weston Centre.

Coming up to the gate.

And right before the gate now, there is a QR code from Arriv.io.

There it is, right there.

So, you pull up your Arriv.io app and you press the 'Park' button.

So, I'm pressing 'Park.'

And there's the camera.

You aim your camera, from your phone, at the QR code.

And -- boom -- you're in.

The gate goes up, and you're ready to park.

Didn't have to roll the window down, didn't have to hand anybody any money.

It's a lot easier.

One of our advisers was one of the founders of Uber.

And I love the quote that he said, because when you talk about the potential in the problem that you're solving, and the scale that you can solve that problem, he said, 'Uber gets you from Point 'A' to Point 'B,' but Arriv.io solves the last mile.'

And think about what that means, because it's your journey, on your terms.

You're back in control.

You can do all kinds of smart things with that.

We've even started selling coffee in some of the parking sites, so that you can park and have coffee waiting for you there.

[ Chuckles ]

You know, so there's crazy stuff that you can do that's really cool.

So, now we're pulling up to the exit gate at the Weston Centre in Downtown San Antonio.

You pull up to the gate, you see the QR code for Arriv.io, you pull up the app, and this time, it's 'Check Out.'

So I'm gonna press 'Check Out.'

It tells you your price.

And when you click 'Check Out,' your camera is going to appear.

There's the camera.

Aim it at the QR code.

Snap, and the gate will magically open for you.

We are free.

We are free to drive amongst ourselves.

But before you exit the garage, remember, always do the safe thing -- put your phone away.

Hands-free.

No texting and driving.

Thank you.

And that wraps it up for this time.

For more on science, technology, and innovation, visit our website, check us out on Facebook and Instagram, and join the conversation on Twitter.

You can also subscribe to our YouTube channel.

Until next time, I'm Hari Sreenivasan.

Thanks for watching.

Funding for this program is made possible by... [ Theme music plays ] ♪♪