Sanas aims to convert one accent to another in real time for smoother customer service calls


In the customer service industry, your accent dictates many aspects of your job. It shouldn’t be the case that there is a “better” or “worse” accent, but in today’s global economy (though who knows about tomorrow’s), sounding American or British is valuable. While many undergo accent neutralization training, Sanas is a startup with a different approach (and a $5.5 million seed round): using speech recognition and synthesis to change the speaker’s accent in near real time.

The company has trained a machine learning algorithm that runs quickly and locally (i.e. without relying on the cloud) to recognize a person’s speech on one end and, on the other, output the same words with an accent chosen from a list or detected automatically from the other person’s speech.

Screenshot of the Sanas desktop application. Credit: Sanas.ai

It plugs right into the operating system’s sound stack, making it ready to use with pretty much any audio or video calling tool. The company is currently running a pilot program with thousands of employees in locations from the US and UK to the Philippines, India, and Latin America. American, Spanish, British, Indian, Filipino, and Australian accents will be supported by the end of the year.

To tell the truth, the idea of Sanas bothered me at first. It felt like a concession to bigoted people who consider their own accent superior and others’ inferior. Technology will fix it… by accommodating the bigots. Great!

But while I still have a bit of that feeling, I can see there’s more to it than that. Put simply, it’s easier to understand someone who speaks with an accent similar to your own. Customer service and tech support are a huge industry, largely staffed by people outside the countries where the customers live. This fundamental divide can be bridged in a way that puts the burden either on the entry-level worker or on the technology. Either way, the difficulty of making yourself understood remains and needs to be addressed – an automated system simply makes it easier and lets more people get their jobs done.

It’s not magic – as you can hear in this clip, the character and cadence of the speaker’s voice are only partially preserved, and the result sounds considerably more artificial:

https://www.youtube.com/watch?v=ZTZ1T9VBa-Y

But the technology is improving, and like any speech engine, the more it is used, the better it gets. And for someone unaccustomed to the original speaker’s accent, the American-accented version may well be easier to understand. For the person in the support role, that likely means better outcomes on their calls – everyone wins. Sanas told me the pilots are just beginning, so no figures are available from this deployment yet, but earlier testing showed a significant reduction in error rates and an increase in call efficiency.

It’s definitely good enough to attract a $5.5 million seed round involving Human Capital, General Catalyst, Quiet Capital, and DN Capital.

“Sanas is committed to making communication easy and smooth so that people can speak confidently and understand each other wherever they are and whoever they want to communicate with,” said CEO Maxim Serebryakov in the press release announcing the funding. It’s hard to argue with that mission.

While the cultural and ethical questions around accent and power dynamics are unlikely to ever go away, Sanas is trying something new that could be a powerful tool for the many people who need to communicate professionally and find that their speech patterns are an obstacle to doing so. It’s an approach worth exploring and discussing, even if in a perfect world we would simply understand one another better.
