Keyboard App Empowers People with Disabilities

Technology has the ability to transform the lives of people with disabilities. One example is SwiftKey, a keyboard app that helps non-verbal users communicate. Joe Osborne, Senior Engineering Manager at SwiftKey, joins Hari Sreenivasan to discuss.

TRANSCRIPT

Technology has the ability to transform the lives of people with disabilities.

One example is SwiftKey, an app for keyboards that helps non-verbal users communicate.

Joe Osborne, Senior Engineering Manager at SwiftKey, joins us to discuss.

First of all, what is SwiftKey, for people who don't know what it is?

So, SwiftKey is probably best known for our keyboard applications.

Best known on Android and iOS devices.

And the technology that sits underneath SwiftKey keyboard also powers a number of our other products.

For example, our product SwiftKey Symbols, which you've mentioned, which is seeking to assist those who perhaps can't use a traditional keyboard, or perhaps don't use language in a traditional sense, to still be able to communicate in a very effective way, to empower them to really communicate with those around them.

Is this just about predicting what your next word is?

Does that shortcut the process of how long it takes to input?

So, this is where it gets very interesting.

So, to step back a little bit, at SwiftKey there are kind of two big things we think about.

One is that technology should always be adapting to the user, and not the other way around.

And secondly that, particularly when it comes to communication, the real input to the system is your intent.

Whether it's messy keystrokes or eye tracking, all these noisy inputs are just a poor proxy for what your intent actually is.

And it's our job to reconcile what your intent was from your messy, incomplete, incorrect input.

That does include an aspect of prediction, but it also is about, in a sense, correction, to label it very, very broadly.

And to do this, we try and understand as much of the text-input process as possible and model it mathematically.

Everything from the physical inputs we see you making to the kind of lexical and typographic errors that you tend to make as an individual, and then all the way down to the language and the linguistics that you use.
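The intent-reconstruction idea described here is often framed as a noisy-channel model. Below is a minimal sketch of that framing, with a toy three-word vocabulary and made-up probabilities chosen purely for illustration; nothing here reflects SwiftKey's actual models or data.

```python
# Noisy-channel sketch: the best guess for the intended word maximizes
#   P(intended word) * P(observed keystrokes | intended word).

# Toy unigram language model: hypothetical prior probability of each word.
LANGUAGE_MODEL = {"hari": 0.02, "hair": 0.05, "hare": 0.01}

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def error_model(observed: str, intended: str) -> float:
    """Toy channel model: each edit costs a fixed probability penalty."""
    return 0.1 ** edit_distance(observed, intended)

def reconstruct_intent(observed: str) -> str:
    """Pick the word maximizing P(word) * P(observed | word)."""
    return max(LANGUAGE_MODEL,
               key=lambda w: LANGUAGE_MODEL[w] * error_model(observed, w))
```

Under this toy model, typing "hari" exactly is left alone even though "hair" has a higher prior, because the zero-edit match dominates; a genuinely noisy input like "harr" is pulled toward the more probable nearby word. Personalizing the language model per user, as discussed above, is what keeps a name like "Hari" from being corrected away.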

So...

How do you learn all that?

Ah, well -- well... As we said, technology should always adapt to the user.

So by observing how the user is using our technology, and the way they're using language, we can build up, on your own device, an understanding of you, how you use your language.

Whether you're multilingual, whether you're monolingual, whether you mix your languages, and what your biases are in your linguistics.

But also the kinds of errors and the error characteristics that you have when you're trying to communicate.

So it's getting smarter the more I type.

Absolutely. Absolutely.

And it would be easy to think about prediction and correction as two separate ideas.

But, really, almost everything that we do is a combination of prediction and correction.

The better you can predict, the better you can imagine what someone is trying to do, what their intent is, the better you can correct for them, as well.

So when I type H-A-R-I into my phone, you won't auto-correct for H-A-I-R?

That's right.

As we learn and observe the way you use and the words you use and when and where you use them, that's exactly right, yeah.

And you also worked with Stephen Hawking.

Tell me a little bit about that.

That's right. So... Perhaps a little bit of history, if that's of interest.

Several years ago, Intel, who had been the provider of Stephen Hawking's whole, complete technology, you know, his whole system, were seeking to do a bit of a revamp of his system.

And they made lots of improvements to the way he could navigate through his applications and things like that, which were great time savers.

But what was clear to them was that his communication rate, the rate at which he could actually enter text, was not what it used to be, and they were looking for ways to improve that.

And they were trying various existing solutions, and they just weren't up to scratch.

So we got in touch with Intel, and we started to work with them.

And so we spent quite a lot of time with Hawking and his team, observing both how he uses his system and the nature of the system itself.

Obviously we didn't have all the details about how his system worked, initially.

Understanding where his pain points were in terms of communication, in terms of getting text into his system.

But also observing the issues that he was less aware of, and what the characteristics of his system were, and where the weaknesses were -- for example, in the sensor he uses to actually input.

And, of course, one would expect the error characteristics that he has and that he experiences are very different from what you and I would experience hammering with two thumbs on our phones.

So he will have very different error characteristics.

The utility of what we are trying to do on his behalf is also very different.

The cost of him making an error, or the cost of us making an error on his behalf, are far higher, for example, than they are on our device.

And so building technology which can adapt and exist in both ends of this spectrum, if you can do that well, you're probably doing a good thing.

So, what are other examples of how this technology is being applied?

What have you learned from the Stephen Hawkings of the world?

What have you learned from people like me using the app on the phone?

So, one of the key things we learned from Hawking, of course, is just what a lifeline communication is for people who don't have another means of communication.

So it's very important for you to do your job right.

And that has a number of consequences.

One is that, even by making a small improvement, you can actually radically change someone's life.

The second is that you have to really respect the position you are holding in that life, and not play with it.

It's a very important position to be in.

And so you have to be very aware of that, and respect the role that your technology is playing.

Coming up to the kind of users like you and me who hammer at a very fast rate on our phones, there's so much to learn there.

Everything from the rate with which language is evolving, and the rate with which different languages are evolving.

Just how different each person's view of a given language is.

There is no real idea of what English is.

Everyone has their different blend.

Everyone has their different colloquialisms.

Everyone uses it slightly differently.

And, as we said earlier, given that we believe technology should adapt to you, we shouldn't enforce on you what our idea of English is.

We should allow our tech to get to know your version of English, to help you do what you're doing and say what you're saying better.

You know, culturally -- for example, I text with people who use a form of Hinglish -- that's a mix of Hindi and English.

Yes, indeed.

There's quite a few modifications of words.

Then there's almost what I would consider, like, 10- to 12-year-old English, which has tons of truncations and incredible shorthand and 'IRL' and whatever.

They're just speaking in a way where they've said a whole sentence, and they've only used eight letters.

Yeah.

And, well, that's kind of impressive when you think about the fact that -- But it is an evolution of the language.

You and I don't speak in Shakespeare's English or the Queen's English.

Not every day. That's right.

No, and that's true.

And we have had discussions, and people have asked us in the past about our role as a communication technology.

Should we be involved in trying to redirect people's language towards something 'proper'? I think the answer is no.

I mean, by definition, this is their language.

Are emojis gonna replace --

That's a very good question.

...the A-to-Z alphabet?

[ Laughs ]

Who knows?

Watch this space.

It is very interesting watching the evolution of things like that, emoji, and now with things like GIFs and stickers.

Yeah.

They clearly have semantic content.

They are kind of linguistic elements.

Do they form part of a language in this way or not?

It's gonna be a very interesting area to watch.

Emoji are a very interesting example of that, yeah.

All right.

Joe Osborne from SwiftKey.

Thanks so much.