Google Assistant Can Now Better Understand and Pronounce Unique Names
Google Assistant update will help it better recognise speech and pronounce names.
Google's latest update for its virtual assistant will help it recognise and pronounce unique names, and better understand the context of users' conversations. The search giant is applying its Bidirectional Encoder Representations from Transformers (BERT) system to improve the quality of conversations between Google Assistant and users. The update is currently available on Google smart speakers in the US and will expand to smartphones and smart displays in the near future.
Announcing the update in a blog post, Google said it is attempting to make Assistant more user-friendly by improving how it recognises speech and pronounces unique names. Users can now teach the virtual assistant to enunciate and recognise unique names: Assistant will listen to the user's pronunciation and remember it, without keeping a recording of the user's voice. The feature is currently available only in English, but Google plans to expand it to other languages soon.
Another important feature introduced with the update is improved understanding of a user's speech and the context of the conversation. For this, Google has rebuilt Assistant's Natural Language Understanding (NLU) models so they can understand context while also improving "reference resolution." The search giant said this update uses machine learning technology powered by BERT. The company claims that, thanks to this technology, Google Assistant can now respond with nearly 100 percent accuracy to alarm and timer commands. Google also plans to bring the capability to devices beyond the smart speakers it is currently available on.
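To illustrate the kind of bidirectional context BERT models capture, the short sketch below uses the open-source Hugging Face transformers library with the generic bert-base-uncased checkpoint to fill in a masked word in a timer-style command. This is not Google's Assistant pipeline, which is not public; the model choice and example phrase are assumptions for demonstration only.

```python
# A minimal sketch of BERT-style contextual understanding, using the open-source
# Hugging Face `transformers` library and the generic `bert-base-uncased` model.
# This is NOT Google's Assistant pipeline; the model and example are illustrative.
from transformers import pipeline

# Masked-language-model pipeline: BERT reads the words on both sides of [MASK]
# to predict the missing token, which is what "bidirectional" refers to.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Print the top candidate words for the masked position, with their scores.
for prediction in fill_mask("Set a timer for ten [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```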
Google has also applied BERT to improve the quality of conversations between Assistant and users. Google Assistant will now draw on a user's previous interactions and understand what is being displayed on the screen to respond to follow-up questions, aiding a more natural conversation.