Google has announced that it will be rolling out enhancements to Google Assistant that allow it to respond better to more natural commands. The update changes things enough that some typical requests may no longer work as they used to, forcing you to rephrase the command. If things work as intended, though, it shouldn't take much guessing.
The update changes how Google Assistant processes input when interacting with Google speakers and other products that feature the assistant. Using large neural networks, the assistant is expected to better understand what you are trying to say, without you having to know the exact command for something.
Although many normal commands will likely still work just fine, the company hopes to dive deeper into how accurately its AI can predict what's on your mind. It will do this by considering things like the room you are in, or the time of day you are making a request.
For example, instead of saying "turn on the living room light" while sitting in the living room, you may just have to say "turn on the light". It shouldn't be too hard for the assistant to guess which light you mean; after all, why would you turn on the bedroom light from the living room with such a vague request?
The assistant can determine your location based on which Google speaker you are talking to (in the example above, the one in the living room). It can also use this information to identify other rooms or areas around the home when you give broader commands that relate to devices in those rooms, including sentences that combine multiple commands or locations.
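To illustrate the idea, here is a minimal, purely hypothetical sketch of room-aware command resolution. Nothing here reflects Google's actual implementation; the device map, function name, and matching logic are all invented for illustration. The point is simply that the room of the speaker that heard the request can disambiguate a vague command like "turn on the light":

```python
# Hypothetical sketch of room-aware command resolution.
# This is NOT Google's implementation; the data and logic are illustrative only.

# Illustrative map of rooms to the devices they contain.
DEVICES = {
    "living room": ["living room light", "tv"],
    "bedroom": ["bedroom light"],
}

def resolve_command(command: str, speaker_room: str) -> str:
    """Pick a concrete device for a vague command, using the room of the
    speaker that heard the request as context."""
    if "light" in command:
        # Prefer a light in the room where the request was heard.
        for device in DEVICES.get(speaker_room, []):
            if "light" in device:
                return device
    # Fall back to the literal command if no contextual match is found.
    return command

# A vague request heard by the living room speaker resolves to that room's light.
print(resolve_command("turn on the light", "living room"))
```

A real assistant would do this with learned models rather than keyword matching, but the contextual signal, which speaker picked up the request, plays the same role.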
These updates will roll out in stages in the near future as the company weans users into the new experience, so you won't suddenly be shocked by how Google Assistant responds to you. Eventually, this may lead us one step closer to having a more real AI experience in the home. Only its name will be Google and not Jarvis, sadly.