Mapping the sounds of New York City

From construction workers drilling to sirens blaring, there is seemingly no escape from the cacophony of New York City streets. But now researchers from New York University and Ohio State University are teaming up to curb these urban irritants with an initiative they're calling "Sounds of New York City." Lead investigator Juan Pablo Bello joins Hari Sreenivasan.

TRANSCRIPT

From construction workers drilling to sirens blaring, there is seemingly no escape from the cacophony of New York City streets.

But now, researchers from New York University and Ohio State University are teaming up to curb these urban irritants with an initiative they're calling Sounds of New York City.

Joining us to talk about this initiative is lead investigator Juan Pablo Bello.

You're literally trying to listen to everything in the city.

Sure. Yeah.

We're trying to understand, you know, what is the composition of sources in the environment of the city so that we can direct mitigation efforts in a more effective way.

Also to understand, really, you know, what the behavior of noise is at a larger scale, right?

So, you have specific sources being activated in a specific location in space and time.

And I think, right now, we don't really have the data to understand, you know, how that influences the way that we go about mitigating noise, the way that we go about deploying city resources to mitigate noise, and, more importantly, how we can encourage self-regulation from, you know, this type of information.

So, right now, you're trying to build a map of New York City, but one we can hear?

Yeah.

I mean, well, I think, at the first level, it's one that we can visualize, so one that will give you really good information about, you know, noisy areas in the city, but also, you know, that this area is noisy with emergency-vehicle sounds.

This area is noisy with, you know, like, street-party noise.

Or if you live around a square, you know, what type of events, you know, change the behavior of the acoustic environment around your location?

So that's the type of information that we're hoping to make more accessible to people through these maps.

How do you capture this data?

New York's a big city.

It's got lots of different neighborhoods, lots of different sounds.

So, it's a combination of things.

So, primarily, we're deploying a large-scale acoustic sensor network, the idea of which is to monitor the acoustic environment 24/7.

But most importantly, the sensors have advanced technology built in to automatically identify the types of sources that contribute to the environment in those locations, right?

So you get 24/7 monitoring of jackhammers versus idling engines versus, you know, like, the sound that trucks make when they're, you know, backing up.

So, you know, part of the idea is to have that identification happening on the sensor itself, in a way that lets you really start to collect source-based patterns at a larger scale.
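To make the on-device identification he describes concrete, here is a minimal sketch of how a 10-second clip might be mapped to coarse source classes. This is not the SONYC code; the feature choices, the class list, and the saved classifier file are all assumptions.

```python
# Minimal sketch (not the SONYC system): label a short clip with coarse source classes
# using a log-mel spectrogram summary and a previously trained scikit-learn classifier.
import numpy as np
import librosa   # audio loading and feature extraction
import joblib    # hypothetical: a classifier trained elsewhere and saved to disk

CLASSES = ["jackhammer", "idling_engine", "backup_beeper", "siren", "speech", "music"]

def classify_clip(path, model_path="source_classifier.joblib"):
    # Load up to 10 seconds of mono audio at a fixed sample rate.
    y, sr = librosa.load(path, sr=16000, mono=True, duration=10.0)
    # Log-mel spectrogram, summarized over time into a fixed-length feature vector.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    logmel = librosa.power_to_db(mel)
    features = np.concatenate([logmel.mean(axis=1), logmel.std(axis=1)])
    # Assumed: the model was trained on the same features and class order.
    clf = joblib.load(model_path)
    probs = clf.predict_proba([features])[0]
    return dict(zip(CLASSES, probs))
```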

How do you see those patterns?

I mean, you don't have an army of humans sitting there with headphones on, listening to every one of these microphones.

No, and in this part of the project, you know, really the idea is that we're going to leverage tools from data science -- large-scale data mining, things like computational topology -- that allow you to look for abnormalities in large data sets.

So, just to try to understand, you know, what is regular in terms of acoustic behavior, but also, more importantly, what is irregular, and what could be attached to certain, specific events in the city.
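As one illustration of that kind of mining (swapping in an isolation forest rather than the computational-topology methods he mentions), you could flag hours whose mix of detected sources looks unusual for a given sensor; the data layout here is assumed.

```python
# Sketch: flag hours whose per-source detection counts look unusual for one sensor.
# `counts` is assumed to have one row per hour and one column per source class.
import pandas as pd
from sklearn.ensemble import IsolationForest

def flag_unusual_hours(counts: pd.DataFrame, contamination: float = 0.01) -> pd.Series:
    model = IsolationForest(contamination=contamination, random_state=0)
    labels = model.fit_predict(counts.values)  # -1 = anomalous hour, 1 = typical hour
    return pd.Series(labels == -1, index=counts.index, name="unusual")
```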

So, are these sensors picking up, well, for you, noise, but, for us, conversations?

Well, no.

The sensors are more concerned -- and we are more concerned -- with source type.

So the idea is that the sensor will tell you there is speech here.

It might even be able to tell you whether it's male or female speech in there, or shouting, but we're not interested in conversational content, nor do we really have the technology to extract conversational content in noisy environments like the city.

I mean, like, the thing to understand here is that, in New York City, you have hundreds of sources active in any given location, right?

So in these kinds of situations, it's very difficult to extract something like phonemes or words.

And, furthermore, you know, we're taking very specific steps to try to ensure there are protections in terms of privacy for the people in these locations.

So, you know, right now, though we're collecting audio data, we're only recording 10-second snippets in these locations.

And they are not contiguous in time, right, so we don't have contiguous recordings that we could identify as a longer conversation.
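A minimal sketch of what that intermittent, non-contiguous sampling could look like on a sensor; the snippet length matches what he describes, but the gap lengths and scheduling policy are assumptions.

```python
# Sketch: schedule short, non-contiguous recordings so no continuous conversation is captured.
import random

SNIPPET_SECONDS = 10  # per the interview: 10-second snippets

def snippet_start_times(hours: float = 24, min_gap_s: float = 60, max_gap_s: float = 600):
    """Yield start offsets (seconds from now) for recordings separated by random gaps."""
    t = 0.0
    end = hours * 3600
    while t < end:
        yield t
        t += SNIPPET_SECONDS + random.uniform(min_gap_s, max_gap_s)  # gap sizes are assumed
```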

So, let's say you build this map that you can see and hear on how New York sounds and where.

What does a policy maker do with this information?

So, right now, if you take one agency, for example, that is tasked with mitigating noise -- the Department of Environmental Protection in New York City -- it has something like 15 inspectors to cover the entirety of New York City in terms of noise mitigation, right?

So these are, you know, highly qualified individuals, but there are only so many of them, and I think it's unrealistic to expect the city to try to hire, you know, thousands of people to be able to cover a city of this scale.

So, the technology at that first level will be able to better direct the efforts of these experts, right?

So just being able to use the kind of operations-research technology that is currently used, you know, for delivery trucks -- to scale your Amazon or, like, FreshDirect deliveries.

You know, you could use the same sort of intelligent allocation of resources to be able to locate inspectors in time and space in ways that maximize their impact.
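As a toy illustration of that kind of resource allocation -- not the operations-research tooling he refers to -- you could greedily send the nearest free inspector to the highest-priority noise hot spot; the names, priorities, and coordinates below are made up.

```python
# Toy sketch: greedily assign a small pool of inspectors to the loudest open hot spots.
from math import hypot

def assign(inspectors, hotspots):
    """inspectors: {name: (x, y)}; hotspots: list of (priority, (x, y)) tuples."""
    assignments = {}
    free = dict(inspectors)
    for priority, loc in sorted(hotspots, reverse=True):  # highest priority first
        if not free:
            break
        nearest = min(free, key=lambda n: hypot(free[n][0] - loc[0], free[n][1] - loc[1]))
        assignments[nearest] = loc
        del free[nearest]
    return assignments

# Example: two inspectors, three hot spots.
print(assign({"A": (0, 0), "B": (5, 5)}, [(3, (4, 4)), (2, (1, 0)), (1, (9, 9))]))
```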

And then, afterwards, you could try to maybe say, 'This is a residential zone.

Let's not put this hospital right next to it.'

Or how would you redesign a city keeping noise in mind?

Yeah.

I mean, I think this is, of course, one of many threads which are currently going on in this city in particular, but, you know, around the world, to sense different aspects of city behavior.

So we definitely think that noise could be one of those components, you know, that will hopefully inform city planning going forward, and the way that people distribute zoning, especially, like, mixed-zoning areas, you know, which is something that the city is very keen to try to roll out, but right now it's very hard.

You know, it doesn't necessarily have the data to do this effectively.

So, you know, we think of this as one data stream that can contribute to, you know, better mixed zoning -- to better allocate, you know, different types of activities across the geographical area of the city.

All right.

Juan Bello from NYU.

Thanks so much.

Thank you so much.