Events are a great way to gather an industry that spans boundaries, domains and technologies. They allow us to collectively address challenges that several players face, apply learnings from leading research scientists and think-tanks, and solve global problems that go beyond business interests. We were fortunate to be part of the Interspeech community by participating in the 2018 edition in September. Interspeech has established itself as a research-driven, multi-topic event covering speech science and technology development. At Sayint, we are in the business of obtaining actionable insights from customer and agent conversations, and given our business focus in India, our participation clearly resonated with the theme ‘Speech Research for Emerging Markets in Multilingual Societies’.
With the focus directed towards emerging economies like India, the sessions were geared towards addressing societal and business challenges in a geography where 29 languages are each spoken by more than a million people. This throws up unprecedented challenges: simply repurposing a globally successful speech recognition model does not work. Here is where technology and local adaptation come to the rescue:
- Machine learning and artificial intelligence technologies have advanced rapidly in recent years
- These advances have enabled the development of robust speech tools and analytics solutions in short timeframes
- Technology efforts must still be supplemented by capable research and development from local language experts
At Interspeech, the range of topics covered over five days was so comprehensive that every participant would have left having learned more than they anticipated.
The sessions on speech analysis and representation, audio segmentation and pitch detection helped us understand market trends and compare them with our award-winning capabilities, such as mood detection and sentiment analysis, which are critical to our clients engaging with their customers. Additionally, the sessions on speech science for end-user applications, rich transcription, and innovations in speech technologies like speech synthesis and recognition resonated with our global product development work at Sayint and provided valuable insights to our 200-plus team of voice data collectors. Our development teams are responsible for launching advanced capabilities like keyword spotting and grouping in our speech models.
Another objective at Interspeech was to share our product R&D expertise in proprietary AI, ML and neural network technologies, and in turn, to understand new trends and capabilities based on the latest deep learning and automation technologies being released in the industry. We took quite a few steps towards strengthening the speech accuracy of our product portfolio, thanks to attending select sessions that addressed these topics. Events like Interspeech are also immense platforms for networking.
We were able to get time with industry leaders like Baidu, SpeechOcean, Google, Microsoft, Amazon, DRDO, Appen, Behavioral Signals, and Talkdesk. With our teams delegated to interact with these companies and understand their key innovations and focus areas in the speech analytics space, they came back inspired by what these companies’ development teams had in the works. The conversations also revealed how the Indian market is becoming a focus area for these companies, both as a testing ground and as a source of revenue opportunities, since the region generates over 30,000 terabytes of data traffic every day. Succeeding in a multilingual region is an acid test of their versatility.
Our Observations at Interspeech
What also came to the forefront during our conversations with leading companies like Facebook and Google was their need for Indian speech datasets to cater to their wide customer bases in India, especially those who speak regional languages. Given their existing conversational analytics capabilities in Hindi, further development covering other regional languages like Bengali or Konkani could open up new business opportunities for them. We presented the capabilities below, which found interest and generated enquiries from their teams:
- Sayint manages a 10,000-hour database of Indic speech
- We have 40+ language experts working on improving regional language speech recognition
- Sayint has pioneered and refined its own language models over time
- The resulting Automated Speech Recognition (ASR) accuracy is about 82% (close to the best industry standards)
- Sayint’s speech analytics offers insights through an omni-channel approach (chat, email, voice, text and social feeds)
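As a rough illustration of what an accuracy figure like 82% means in practice: ASR accuracy is conventionally reported as 100% minus the word error rate (WER), where WER is the word-level edit distance between a reference transcript and the ASR hypothesis, divided by the number of reference words. The sketch below is a generic, illustrative computation with made-up sentences, not Sayint’s actual evaluation code:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical example: one substitution ("bak") and one deletion ("morning")
ref = "call me back tomorrow morning"
hyp = "call me bak tomorrow"
wer = word_error_rate(ref, hyp)
print(f"WER: {wer:.0%}, word accuracy: {1 - wer:.0%}")  # WER: 40%, word accuracy: 60%
```

An "82% accurate" ASR system, in this framing, would have a WER of roughly 18% on its evaluation set.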
We saw that our continued focus on creating strong regional language datasets and converting them into deployment-ready packages was compelling to some of the world’s leading technology companies. It opened up channels for partnerships and technology collaborations, and even generated sales interest. Additionally, our objective of tapping global markets grew even stronger, especially in regions like the European Union and Asia. We are looking to invest in data collection and language model building to strengthen our acoustic and speech-to-text transcription models, and we remain focused on improving our ASR scores.
If you were at Interspeech 2018 and are interested in knowing more about how we could partner with you, or are looking for information about our products, get in touch with us and we would be more than happy to connect. Here’s hoping we see you at Interspeech 2019.
Product Marketing Manager at Sayint
Chaitanya is a Product Marketing Manager at Sayint. He is responsible for providing actionable insights to Sayint to help increase conversion rates. He also owns content strategy, brand positioning and SEM activities, along with GTM marketing. Prior to Sayint, he worked as a digital marketing manager at various SaaS companies. Chaitanya holds a Master’s degree in Strategic Product Marketing from Cambridge Judge Business School.