Dec 19, 2019

Why Have Voice Assistants Started Supporting Conversations in Hindi?

Image Source: Gnc.com

Hindi, the fourth most spoken language in the world, has recently made its way into the virtual assistant industry. As the usage of voice assistants grows, tech companies are working on different tactics to expand their reach in India, one of the world’s most populous countries.

For today’s discussion, we have compiled the main reasons voice assistants now support Hindi, along with a few details you won’t want to miss.

Without further ado, let’s dive straight in.

Among India’s 1.3 billion people, English is still often treated as the default language for technology and devices. But although English is spoken and understood by many, people usually prefer to hold their conversations in Hindi. Know why?

Why Is The Hindi Language Preferred Over English?


Hindi is the most widely spoken first language in India, well ahead of English. Derived from Sanskrit, Hindi is not only the preferred way for many Indians to communicate, but also a carrier of India’s culture and tradition.

English speakers in India generally belong to the educated, better-off classes. That is why the top digital assistants, Google Assistant and Alexa, the assistant of e-commerce giant Amazon, have recently introduced a feature that allows users to speak to them in their native language: Hindi.


The Usage Of The Hindi Language In Voice Assistants:


Image Source: Princebaba.com

With a little research, we have come across a few of the main reasons voice assistants have started to adopt Hindi.

Let’s have a quick look below:

  • To make digital assistants more accessible and break the language barrier, voice assistants have adopted Hindi, allowing users to give commands and receive results in Hindi.
  • It has also been found that although people are normally comfortable using English as their device language, when speaking they tend to use either Hindi or a mix of Hindi and English, often called “Hinglish”.
  • The customers currently using smart assistants are generally from metropolitan cities and educated enough to speak English. By supporting Hindi, companies make it more likely that customers from smaller towns and cities will give smart voice assistants a try.
  • Consumers in India are showing growing interest in voice search. Google, one of the world’s leading companies, found that usage among Indian users rose as soon as it released Hindi voice search. Keeping that in mind, introducing Hindi to voice assistants is a good deal for both companies and their users.
With the ability to give commands to a smart voice assistant and receive responses in the same language, customers will experience a personal touch that was missing until now.

What Is The New Multilingual Support In Digital Voice Assistants?

The war between voice assistants has been in full swing ever since Siri was first created.

At the moment, the only digital assistants supporting Hindi are Amazon’s Alexa and Google Assistant. To make your task easier, we have detailed the latest features each of them offers.

Alexa: 

Image Source: Diariodelviajero.com

E-commerce and retail giant Amazon has taught Alexa to speak and understand Hinglish, and the assistant supports shudh (pure) Hindi as well.

“If you say, Alexa, alarm set karo, in that 'alarm' and 'set' are English words, but 'karo' is a Hindi word. But, if you ask somebody, he will say – haan, wo Hindi main baat kar raha hai (yes, he is speaking in Hindi). That's the way Indians speak in Hindi to a large extent, and that's why Alexa can speak and understand shudh (pure) Hindi. It also supports the Hinglish variant” - Dilip Kumar, Vice President of Alexa, said in a recent press conference held in New Delhi.

Do you know anyone who speaks pure Hindi in this day and age? Considering that, Hinglish support in Alexa could prove to be a small revolution for digital assistants in the near future.

Google Assistant: 

Image Source: pocket-lint.com

To level up the digital voice assistant game, Google has added eight more local languages to Google Assistant, including Gujarati, Kannada, Urdu, Bengali, Marathi, Tamil, and Telugu.

On top of that, users don’t need to change the language in the settings menu; they can switch to their preferred language simply by saying “Okay Google, Hindi mein bolo” or “Hey Google, talk to me in Hindi”.

With this update, users can start making use of the feature right away, helping the company reach more of its Indian users.

Final Thoughts:

People feel more comfortable speaking their local language, so adopting India’s most widely spoken language will ease voice assistant companies’ way into Indian households. Moreover, with the language barrier removed, customers who don’t speak or understand English will feel more comfortable using the assistants, and the usage of voice assistants should continue to grow.






Jun 3, 2019

5 Artificial Intelligence Trends to Look for in 2019

Image Source: forbes.com

As can be expected, a lot of progress was made in the realm of artificial intelligence (AI) in 2018. If it wasn’t something new coming from Amazon’s Echo line or Google’s Google Home series, both of which employ AI to enable Alexa and Google Assistant to do everything they can, it was something related to autonomous cars, enterprise tools and artificial intelligence applications, or robots that are more lifelike than anything that’s come before them.

In other words, artificial intelligence was a hot topic in 2018, a statement that is true regardless of industry or social circle. After all, we already touched on the consumer electronics industry with our mention of smart speakers, the automotive industry with autonomous cars, a wide breadth of industries with enterprise tools, and the robotics industry with, well, robots. When we put all of this together, we arrive at the conclusion that artificial intelligence is changing lives daily and disrupting industries left and right.

Whether you think this is a boon for society that will allow us to reach new heights, or a detriment that will ultimately prove to be our downfall, you will most likely agree that the more you know about it, the better. To make sure that you know as much as you can, our discussion for today will revolve around five artificial intelligence trends you can expect to see in 2019.

1. Improvements to Facial Recognition

Facial recognition is one of the most widely known artificial intelligence applications to have taken the world by storm. As its name suggests, it’s a technology used to identify people by matching digital patterns extracted from their facial features. Today, facial recognition is employed for a variety of uses, such as automatic tagging on Facebook, unlocking smartphones and computers, aiding forensic investigations, and so on.
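To make the matching idea concrete, here is a minimal sketch using the open-source face_recognition Python library; the image file names are purely illustrative, and the snippet assumes each photo contains exactly one detectable face.

```python
import face_recognition

# Load a reference photo and a new photo to check (file names are illustrative)
known_image = face_recognition.load_image_file("reference_person.jpg")
unknown_image = face_recognition.load_image_file("new_photo.jpg")

# Each face is reduced to a numeric "encoding" derived from its facial features
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# Compare the encodings: True means the faces are judged to be the same person
match = face_recognition.compare_faces([known_encoding], unknown_encoding)[0]
print("Same person?", match)
```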

As we move through 2019, expect improvements to facial recognition technology that lead to higher accuracy and reliability. When this happens, also expect other uses for this AI technology to pop up and improve, such as using it to prevent retail crime, make advertising much more targeted, protect schools and other places from threats, and so much more.

2. Less Biased Data

Seeing as how we just covered facial recognition, it’s appropriate that the next AI trend of 2019 deals with biased data. In facial recognition, this bias shows up as AI systems having trouble identifying women and darker-skinned individuals, because the vast majority of their training data has been geared toward identifying white males. In other areas, biased data has been surfacing more often as machine learning models are increasingly used in decision-making processes such as hiring employees, approving mortgage loans, granting parole, and so on. In 2019, expect more effort to go into diversifying training datasets and auditing models for these disparities before they are deployed.
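As a rough illustration of what such an audit can look like, here is a small, hypothetical Python sketch that checks how evenly demographic groups are represented in a labeled dataset and compares a model’s accuracy across those groups; the data schema and the `predict` function are assumptions, not any particular vendor’s API.

```python
from collections import Counter

def group_balance(samples):
    """Share of each demographic group in the training data.
    `samples` is a list of dicts with a 'group' key (hypothetical schema)."""
    counts = Counter(s["group"] for s in samples)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def per_group_accuracy(samples, predict):
    """Accuracy of `predict` broken down by group, to expose disparities."""
    correct, seen = Counter(), Counter()
    for s in samples:
        seen[s["group"]] += 1
        correct[s["group"]] += int(predict(s["features"]) == s["label"])
    return {group: correct[group] / seen[group] for group in seen}

# Example usage with toy data and a trivial stand-in model
data = [
    {"group": "A", "features": [0.2], "label": 1},
    {"group": "A", "features": [0.7], "label": 0},
    {"group": "B", "features": [0.9], "label": 1},
]
model = lambda features: int(features[0] > 0.5)
print(group_balance(data))
print(per_group_accuracy(data, model))
```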

3. Integration of AI into New Technologies

It seems as if artificial intelligence is integrated into more and more technologies every year, and 2019 will be no different. For instance, one of the most anticipated advancements that’s still ongoing has been the convergence of AI and the Internet of Things (IoT). With machine learning behind them, IoT platforms will be able to gain real-time insights and detect patterns and anomalies about the information their sensors gather and generate. Now take this thought and extend it to varying technologies like blockchain, biometrics and quantum computing—the results are truly mind-boggling.
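For a sense of what “detecting anomalies in sensor data” can mean in practice, here is a minimal, self-contained sketch of a rolling z-score check that a cloud-connected IoT platform might run on incoming readings; the window size and threshold are illustrative choices, not a reference to any particular product.

```python
from collections import deque
import math

class SensorAnomalyDetector:
    """Flag readings that deviate strongly from a rolling window of recent values."""

    def __init__(self, window=100, threshold=3.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def check(self, value):
        is_anomaly = False
        if len(self.readings) >= 3:  # need a little history before judging
            mean = sum(self.readings) / len(self.readings)
            variance = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = math.sqrt(variance) or 1e-9  # avoid division by zero
            is_anomaly = abs(value - mean) / std > self.threshold
        self.readings.append(value)
        return is_anomaly

# Example: a temperature sensor whose last reading spikes unexpectedly
detector = SensorAnomalyDetector(window=50, threshold=3.0)
for reading in [21.0, 21.2, 20.9, 21.1, 21.0, 35.0]:
    print(reading, "anomaly!" if detector.check(reading) else "ok")
```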

4. Advancements in Reinforcement Learning

Unlike supervised learning, which uses labeled datasets of input and output pairs, or unsupervised learning, which finds connections within unlabeled data, reinforcement learning relies on sequential decisions. In this way, it’s more “real” than the other learning methods because it’s more closely aligned with how we as humans learn. That is to say, instead of recognizing patterns in a fixed dataset, it learns from experience, taking actions and using the feedback it receives to move toward a goal, much like learning to play a game of chess.
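To see what learning from sequential decisions looks like in code, here is a tiny tabular Q-learning sketch on a toy five-state corridor, where an agent learns from trial and error to walk toward a goal; the states, rewards, and hyperparameters are all illustrative.

```python
import random

# A tiny tabular Q-learning sketch on a toy 1-D "corridor" (states 0..4,
# goal at state 4). States, rewards, and hyperparameters are illustrative.
ACTIONS = [-1, +1]                 # step left or step right
N_STATES, GOAL = 5, 4
alpha, gamma, epsilon = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit what has been learned, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else -0.01

        # Update the value estimate from experience rather than from labeled data
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The learned greedy policy steps right in every non-goal state
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})
```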

Today, reinforcement learning is used in areas like gaming and robotics, but it’s not as prevalent as many would like for it to be. As more industries experiment with it in 2019, expect a lot more advancements coming our way.

5. Interconnecting Neural Networks

To put it simply, (artificial) neural networks are computer systems modeled after the human brain and nervous system. They aim to emulate how our brain works, such as how we learn from examples; with computer systems, this translates to learning without being programmed with task-specific rules. Unfortunately, though demand for neural networks is high everywhere they can be used, most teams find it a challenge to choose the best framework for developing their own models. Making this problem worse is the incompatibility between competing neural network toolkits, which makes moving models between them difficult. Thankfully, with tech giants like Microsoft and Facebook working together on the Open Neural Network Exchange (ONNX), other tech companies will likely follow suit and work on interconnecting neural networks themselves.
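As a concrete example of the kind of interoperability ONNX aims for, here is a minimal sketch that exports a small PyTorch model to the framework-neutral ONNX format and then runs it with ONNX Runtime; the model architecture and file name are placeholders.

```python
import torch
import torch.nn as nn

# A toy model standing in for whatever network a team has trained
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
dummy_input = torch.randn(1, 4)

# Export the PyTorch model to the framework-neutral ONNX format
torch.onnx.export(model, dummy_input, "tiny_model.onnx",
                  input_names=["input"], output_names=["output"])

# A different toolkit can now load and run the same model, e.g. ONNX Runtime
import onnxruntime as ort
session = ort.InferenceSession("tiny_model.onnx")
result = session.run(None, {"input": dummy_input.numpy()})
print(result)
```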

Let’s Take a Second Look At the Latest Trends of AI

As we just saw, 2019 has a lot in store for us, and a lot of it is thanks to artificial intelligence. From improvements in facial recognition technology, to growth in reinforcement learning, here’s what you can expect from AI as we move toward 2020:

1. Improvements to Facial Recognition
2. Less Biased Data
3. Integration of AI into New Technologies
4. Advancements in Reinforcement Learning
5. Interconnecting Neural Networks

Enjoy what’s to come!

Mar 6, 2019

What is Cloud Robotics?

Source: wicz.com
Whether it’s an imminent synthetic invasion, or a device that sweeps your home while you lounge on the couch, robots are something all of us have thought about at one time or another. And then there’s the cloud, an amorphous something that the vast majority of us have heard of, though not as many really know what it is. To quickly rectify this, think of the cloud as a metaphor for the internet: data is stored on servers maintained by cloud computing providers like Amazon with Amazon Web Services (AWS).

With that covered, it’s time to move on to what happens when you combine robots and the cloud: the emergence of cloud robotics. A little background: the term “cloud robotics” was first used in 2010 by James Kuffner, an American roboticist and now CEO of the Toyota Research Institute - Advanced Development, back when he worked at Google. Since then, we’ve seen many advancements in cloud robotics development and in the cloud robotics market as a whole. Before we get to those, what exactly is cloud robotics?

What Is Cloud Robotics?

Cloud robotics is essentially the use of cloud computing to support robotic functionality. It’s a field of robotics that attempts to marry cloud technologies like cloud computing and cloud storage with robotics, resulting in a robot that is connected to the cloud via the internet. When this happens, the robot is endowed with everything the cloud has to offer, such as powerful computation, storage and communication resources, which yields a relatively lightweight, low-cost robot with an “intelligent brain” hooked up to the cloud and all the data it can offer. In human terms, imagine a person whose brain is always connected to the internet and can always pull information from it.

Source: creativemarket.com
As you can imagine, a robot with a cloud-connected brain comes with many benefits. For example, with the explosive growth of big data we’ve seen in recent years, it would have real-time access to libraries of images, videos, books, publications, maps, benchmarks, and pretty much anything else that can be stored online. And then there’s cloud computing, which would equip it with the computational power needed for complex statistical analysis and learning, as well as the ability to learn collectively by connecting with other robots and systems to share information.
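A minimal sketch of that division of labor might look like the following, where a lightweight robot client posts its raw sensor readings to a cloud service and receives a computed plan back; the endpoint URL and response fields are hypothetical, not any specific platform’s API.

```python
import requests

CLOUD_ENDPOINT = "https://example.com/api/plan"  # hypothetical cloud service URL

def offload_planning(sensor_snapshot):
    """Send raw sensor data to the cloud and receive a computed plan back.

    The heavy computation (mapping, object recognition, path planning)
    happens on cloud servers; the robot only needs a network connection.
    """
    response = requests.post(CLOUD_ENDPOINT,
                             json={"sensors": sensor_snapshot},
                             timeout=5.0)
    response.raise_for_status()
    return response.json()  # e.g. {"next_waypoint": [x, y], "labels": [...]}

# The onboard loop stays lightweight: read sensors, ask the cloud, act.
plan = offload_planning({"lidar": [0.8, 1.2, 0.5], "battery": 0.76})
print(plan)
```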

What Cloud Robotics Development Platforms Are Currently Available?

Cloud robotics may be a relatively new technological field, but we already have access to a couple of robot operating systems that meet the conditions for a cloud robotics development platform:
  1. It has to be based on the cloud
  2. Proof of concept has to work on robot simulations
  3. It must allow transfer to the real robot with a standard procedure

Google Cloud is working on a new cloud robotics platform that combines the power of AI, robotics, and the cloud. According to Google, the initiative will enable an open ecosystem of automation solutions that use cloud-connected collaborative robots, with its AI and ML services making sense of the unpredictable physical world so that robotic automation can work efficiently in highly dynamic environments. The result: fewer silos, more flexibility, and the freedom to innovate.

With this platform, developers will have access to all of Google’s data and AI capabilities, ranging from Cloud Bigtable to Cloud AutoML. Additionally, with access to Google Cartographer, which provides real-time simultaneous localization and mapping in 2D and 3D, robots will be able to process sensor data and localize within a shared map. Even better, as Google said, “even if your environment changes over time, our spatial intelligence services will analyze your workspaces and can be used to query, track and react to changes in the environment.”

What Are Examples of Cloud Robotics Developments?

Up to this point, you’ve probably thought of cloud robotics as it pertains to the humanoid robots we’ve been reading about and seeing in books, movies and TV shows. Well, that may happen in the future (or is currently happening in select laboratories around the world), but for now the most common examples of cloud robotics you’ll likely encounter are self-driving cars and assistive robots.

Source: aarp.org
For example, consider self-driving cars like those from Tesla or Waymo, a self-driving technology development company and subsidiary of Alphabet Inc. (Google). The fact is that these cars use the cloud to gather information they need to properly and safely maneuver around other cars, people and objects. Now think of the future, when these types of vehicles become commonplace. In this future day, they will theoretically be able to “communicate” with each other and act in sync according to traffic patterns and conditions, which will greatly improve road safety and minimize accidents.

And then there are assistive robots like the Roomba, which come with certain cloud-enabled features that improve their functionality. Even though they’re not actually sweeping things with a broom, lifting objects to get the dirt underneath, or using common sense not to clean over a liquid spill (things a true future assistive robot would do), they still use cloud robotics technology to operate, and they have been relying on it more and more with each model that is released.

Final Thoughts

Cloud robotics is no longer the future; it’s the present. Though we still face challenges that limit its current and future capabilities, such as effective load balancing and scalable parallelization across grid-computing resources, we’re fast approaching a point where fiction becomes fact and fully functioning robots walk around with the cloud nestled safely inside.

Feb 20, 2019

AI Powered Audio Trends of 2018

    Source: techhive.com
Although there have been many breakthroughs in the field of artificial intelligence (AI) in recent years, and I mean MANY, one stands above the rest for us, the consumers: smart speakers. Whether it’s Amazon’s line of Echo devices or Google’s Home offerings, we’ve long since adopted these wonderful gadgets into our daily lives. In fact, Google reported in 2017 that 72% of people who own a voice-activated speaker have incorporated it into their daily routine. That was 2017; imagine today, a couple of years and a whole bunch of updates and developments later.

AI Powered Audio Trends: Our Love for Smart Speakers

It’s 2019 and we’re even more in love with AI-powered smart speakers than we were in 2017, 2018, or any other year that doesn’t include the future. After all, when coupled with intelligent digital assistants like Alexa and Google Assistant, which can carry out our commands, search the web and control other smart devices around the home (and even around the world, as long as there’s a WiFi connection!), it’s no surprise that the industry is expanding at breakneck speed. How fast, you ask? So fast that its market revenue is projected to jump from $4.4 billion in 2017 to over $17 billion in 2022; that fast.

As for the crowd favorites, you may have guessed that the two most popular brands in the smart speaker market are Amazon’s Echo/Alexa and Google’s Home/Google Assistant pairing, followed by smaller brands like Xiaomi and Apple. More specifically, even though Amazon had 35.5% of the smart speaker market share worldwide in the tail-end of 2018, their market dominance has been steadily declining as more competitors enter the market.

Source: pcmag.com
As a matter of fact, that right there is what we will be covering today… sort of. As we gear up for a brand new year with equally new advancements in the smart speaker game, our discussion for today revolves around the biggest audio trends of 2018 and how they’re leading up to the ones we’ll see this year. To be more exact, we’re first going to cover what Amazon had in store for us, and then round it out with Google.

Let’s get started!

Audio Industry Trends 2018: Amazon

Starting with Amazon, let’s begin by touching on all of their Echo variants. Keep in mind that these are Amazon’s speakers; they have other products, like the Echo Input and Echo Link, that are Alexa input devices with no onboard speakers. With that in mind, here’s what Amazon has to offer:

● Amazon Echo: Amazon’s flagship smart speaker

● Amazon Echo Dot: A hockey puck-sized version of the Echo

● Amazon Echo Tap: A smaller, portable version of the Echo

● Amazon Echo Look: A camera and smart speaker that can take full-length photos and 360-degree videos, with built-in AI for fashion advice

● Amazon Echo Show: A smart speaker with a touchscreen display that can show visual information to accompany its responses

● Amazon Echo Spot: A hemispherical device with the same functions as the Echo Show, but smaller

● Amazon Echo Plus: A speaker that shares design similarities with the first-generation Echo but also doubles as a smart home hub

● Amazon Echo Auto: An Echo device designed for cars

Now that we’ve covered the basics, let’s look at what Amazon was up to in 2018 (even if some of it only touches on speakers a tiny bit):

● Amazon’s Echo Look was made available to everyone in the US in June 2018. The device, which analyzes your fashion choices and makes recommendations through AI and machine learning, had previously only been available through invites.

● Amazon unveiled a second generation of the Echo Show at an Alexa-themed product event in September 2018, and it was formally released the following month. The device featured a complete redesign, with a mesh casing that replaces the black plastic one, speakers on the side and back to allow a larger display, and smart home integration similar to the Echo Plus.

● The Echo Sub is a subwoofer released in October 2018 that not only connects to your speakers, but also responds to voice commands like, "Alexa, turn up the bass."

● The Amazon Echo Input, a new device designed to connect to whatever speakers you want (basically an Echo Dot without a speaker), was released toward the end of 2018.

● A brand new Echo Dot with an improved finish, larger speakers and 70% louder sound was released in the second half of 2018.

● Released toward the end of October 2018, the second-generation Echo Plus includes a built-in temperature sensor and smart home hub.

● Skype calling was made available to just about every Echo device in November 2018, an ability that was previously restricted to the Echo Show and Spot.

● Amazon also announced the Echo Auto, a dash-mounted device that uses your phone's mobile connection to access Alexa capabilities. Offered by invite only in December 2018, the Echo Auto is expected to be released to the general public later this year.

● BONUS - Echo Wall Clock: Yes, this is a clock, but it’s a clock announced in late 2018 that can connect to your Echo devices and be controlled with the sound of your voice!

● BONUS - AmazonBasics Microwave: Announced with the Echo Wall Clock, the AmazonBasics Microwave is a microwave with an Alexa button that connects to the nearest Echo device so you can cook through voice commands.



Source: pcmag.com

Audio Industry Trends 2018: Google

With a solid idea of what Amazon graced us with last year, it’s time to move onto Google and Google Home, their line of smart speakers that includes the following variants:

● Google Home: Google’s original smart speaker, released to compete with the Echo

● Google Home Mini: A smaller version of the Google Home with the same overall functionality

● Google Home Max: A larger version of the Google Home with stereo speakers and Smart Sound, an adaptive audio system that uses machine learning to automatically adjust sound output based on factors like the environment

● Google Home Hub: Like the Echo Show, a smart speaker with a touchscreen display that can provide visual feedback for queries

Just like we did for Amazon, now that we know Google’s offerings we’re going to move on to what that meant in 2018:

● Both Google Home and Google Home Mini made their official debut in Italy in March 2018, allowing our Italian friends to control their speakers with the sound of their voice.

● After Italy came India. Following Amazon in their quest to become the world’s go-to smart speaker brand, Google released Google Home and Google Home Mini in India in April 2018.

● Looking out for their users’ experience, Google announced in March 2018 that users would be able to pair any of their Google Home devices with their own Bluetooth speakers without requiring a Chromecast streamer, allowing them to control their entertainment experience with their voice.

● Google released the first Smart Displays with the Google Assistant in July 2018. While not Google products in and of themselves, Google authorized third-party manufacturers like Lenovo, JBL and LG to create smart speakers with Smart Displays housed in them.

● Officially released in October 2018, Google Home Hub is the newest member of the Google Home family, and this one comes with a screen.

● Hoping to “make storytime more magical,” Google partnered with Disney in November 2018 to release a select collection of Little Golden Books that, when read aloud, would prompt the speaker to play relevant sound effects and music to bring the story to life.

Final Thoughts


Even though we focused on Amazon and Google, the two giants in the smart speaker game, a lot happened in 2018 that did not involve them. For example, Apple made its debut in the smart speaker market in February 2018 with the release of the HomePod, a speaker with Siri integration that allows for hands-free voice control. Additionally, Sonos announced in 2017 that Sonos One, their own smart speaker, would include Alexa onboard for voice control, and made it happen in 2018; although they had originally planned to also include Google Assistant in 2018, that project was pushed to an early 2019 release.

One more before we wrap up: Facebook announced in October 2018 that they too were entering the smart speaker market (or rather, the smart device market) with Portal and Portal+, two new video communication devices for the home that will “dramatically change the way we keep in touch.” The best part? They offer hands-free control with Alexa.