Technology is advancing faster than ever, and voice technology is part of that. "Smart" devices have been built into many aspects of everyday life, from cars, TVs, and cell phones to the lights, fans, and air conditioning in people's homes, and with them comes the ability to turn something on or off using just your voice. But voice assistants often can't understand people with Down syndrome. Now, Google is partnering with the Canadian Down Syndrome Society to change that.
Project Understood has asked adults with Down syndrome to participate by recording and donating phrases, helping voice technology learn to better understand the unique speaking patterns of people with Down syndrome. "With the help of the Canadian Down Syndrome Society we were able to sample a small group to test whether there were enough patterns in the speech of people with Down syndrome for our algorithm to learn and adapt," Julie Cattiau, a Google project manager, told Disability Scoop. "It's exciting to see the success of that test and move into the next phase of collecting voice samples that represent the vocal diversity of the community. The more people who participate, the more likely Google will be able to eventually improve speech recognition for everyone."
Due to low muscle tone and differences in facial structure, people with Down syndrome often need speech therapy as children and, as adults, still have different speech patterns from the typical population. As a result, voice assistants like Siri and Alexa miss approximately every third word spoken by a person with Down syndrome, according to Disability Scoop. The reason is simple: no one has yet gathered the voice data needed, which is what Project Understood aims to change. And as Project Understood explained, this is especially disheartening, because people with disabilities are often the ones who stand to benefit most from voice technology.
Voice interfaces have now been sold in millions of products, from smartphones to vehicles to home devices, and these systems offer endless possibilities for enhanced living. As it currently stands, though, the technology is not optimized for the people who would benefit from it the most: people with disabilities. Automatic Speech Recognition (ASR) can greatly improve the ability of those with speech impairments to interact with everyday smart devices and facilitate more independent living. However, these systems have predominantly been trained on "typical" speech, and not all human speech is the same.
“For most people, voice technology simply makes life a little easier,” Laura LaChance of the Canadian Down Syndrome Society told Disability Scoop. “For people with Down syndrome, it has the potential for creating greater independence. From daily reminders to keeping in contact with loved ones and accessing directions, voice technology can help facilitate infinite access to tools and learnings that could lead to enriched lives.”
Many people still assume that those with Down syndrome are incapable of leading independent lives. Yet with advances in science and medicine, they are living longer, healthier lives and accomplishing more than ever before. The issue is not that people with Down syndrome are not capable; it's that they face barriers the typical population does not. With initiatives like Project Understood, those barriers are a little closer to being knocked down, giving people with Down syndrome even more chances to lead the kind of lives most people take for granted.