How You Would Use Google Assistant

There is a gaggle of technologies out there that can be strung together into something totally new. For instance, Google Assistant is an AI that can read text and then give you suggestions based on the context of the moment. Google's latest AI iteration shows the direction AI can go, and why consumers should pay attention.

The Technology Exists

Scanning pictures is old technology; it has existed since before Windows. When you scan a document or a picture, optical character recognition can generate a text file from it. For a long while, though, the process was slow and the output text needed to be cleaned up before it was accurate.

This is where CAPTCHA may have helped Google a lot. Google runs a CAPTCHA program that shows people snippets of text taken from pictures and asks them to type what they see. For close to a decade, almost every site had a CAPTCHA with these picture snippets being decoded by humans. The pictures come from real documents, so the resulting fine-tuned technology can be reused or re-purposed for better text recognition.

Speech to text is another technology that has been around for a long while: almost 30 years as a commercial product. The problem with early versions was that you had to read a standard passage aloud to calibrate the machine, and each machine supported only a limited number of speakers. Nowadays, voice is the principal way to talk to an AI, and most voice recognition problems have been solved by using context information.

It is like carrying on a conversation while partially deaf, or in a noisy environment. You do not need to understand every syllable; you only need to get the gist of the conversation and pull up background information to work out what the other person is saying. In the same manner, voice-to-text can make use of sound-alike phrases and a large library to cross-reference.

Besides, if the machine is wrong, it is happy to be corrected, and it learns from each correction.
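The cross-referencing idea above can be sketched in a few lines. This is a toy illustration, not Google's actual pipeline: candidate "sound-alike" transcriptions are scored against keywords assumed to come from the surrounding conversation, and the best contextual fit wins.

```python
# Toy context-based disambiguation: pick the sound-alike transcription
# that shares the most words with the conversation's context.
# The keyword set below is an assumed example, not real Assistant data.
CONTEXT_KEYWORDS = {"restaurant", "dinner", "table", "menu"}

def score(candidate: str, context: set) -> int:
    """Count how many context keywords appear in a candidate transcription."""
    return sum(1 for word in candidate.lower().split() if word in context)

def pick_transcription(candidates: list, context: set) -> str:
    """Choose the candidate that best matches the context."""
    return max(candidates, key=lambda c: score(c, context))

sound_alikes = ["book a table for two", "look at able four too"]
print(pick_transcription(sound_alikes, CONTEXT_KEYWORDS))
# "book a table for two" wins because "table" matches the context
```

A real system would score acoustic likelihood and language-model probability together, but the principle is the same: context breaks ties between phrases that sound alike.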

Getting Information from a Picture

Machine learning, as an AI applies it to people, is a constructive way of piecing together information and coming up with something valid and relevant. After you take a picture of a poster, the AI should be able to tell you whether you have a prior engagement by looking at your calendar. It should also be able to work out which of your friends would be interested in the contents of the poster. If it were a poster about a restaurant promo, the AI could check your contacts and address book to find out who lives nearby. It can also cross-check your friends' social media posts to see who likes this particular food, or who has eaten at this restaurant.
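The steps just described amount to a few simple lookups and intersections. Here is a minimal sketch under assumed data structures (the calendar, contacts, and likes below are hypothetical stand-ins, not any real API):

```python
from datetime import date

# Hypothetical personal data stores standing in for a real calendar,
# address book, and social feed. All names here are invented examples.
calendar = {date(2024, 5, 4): "Dentist appointment"}
contacts = {"Ana": "Downtown", "Ben": "Suburbs", "Cho": "Downtown"}
social_likes = {"Ana": ["sushi", "ramen"], "Ben": ["pizza"], "Cho": ["sushi"]}

def suggestions_for_poster(event_date, neighborhood, cuisine):
    """Cross-reference a poster's details against personal data."""
    conflict = calendar.get(event_date)  # prior engagement that day?
    nearby = [name for name, area in contacts.items() if area == neighborhood]
    fans = [name for name, likes in social_likes.items() if cuisine in likes]
    return {
        "conflict": conflict,
        "friends_nearby": nearby,
        "friends_who_like_it": sorted(set(nearby) & set(fans)),
    }

result = suggestions_for_poster(date(2024, 5, 4), "Downtown", "sushi")
```

With this data, the sketch flags the dentist appointment as a conflict and identifies which nearby friends also like the cuisine on the poster: exactly the kind of association the article describes.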

This method of association is what machine learning and big data are all about. There is a lot of information online about you and your friends. If an AI can access all your social media posts, as well as your friends' and followers' posts, then it can gather more information and show you more relevant results.

With more interaction on your part, it can come up with a list of things to do with the poster information. It does not matter whether the poster is about a sale or about the aesthetics of its design; what matters is that the AI can come up with a list of relevant options for you to choose from. After a while, the AI can refine the list according to the choices you made on prior lists.
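That refinement loop can be sketched very simply: count which category of suggestion you act on, then rank future suggestions by those counts. This is a toy illustration of the feedback idea, not a description of any real ranking system.

```python
from collections import Counter

# Each time the user picks a suggestion, remember its category so that
# category is weighted more heavily in future lists.
picks = Counter()

def record_choice(category: str) -> None:
    """Record which category of suggestion the user acted on."""
    picks[category] += 1

def rank(suggestions: list) -> list:
    """Order (item, category) pairs by how often their category was chosen."""
    ordered = sorted(suggestions, key=lambda s: picks[s[1]], reverse=True)
    return [item for item, _ in ordered]

record_choice("food")
record_choice("food")
record_choice("design")

options = [("poster typography tips", "design"), ("nearby sushi deals", "food")]
print(rank(options))  # food suggestions now rank first
```

After two food choices and one design choice, the food suggestion moves to the top of the list, which is the "refine according to prior choices" behavior the paragraph describes.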

More Data Courtesy of Google

If you are wondering why Google has so much data about you, consider that Amazon Alexa can do much the same thing, even though Amazon does not ostensibly keep a comparable repository of your data. All it has is information about your browsing on the Amazon website. Think about how much Alexa can infer from that alone. Now think about all the information you have freely given to Google: the videos you watched on YouTube, your searches, and the information from Android. All of it is in play when you use their AI.

Conceivably, Google can get even more data from social media, because you use it on your cellphone or your computer. If you ever lose your cell phone, there is a wealth of information that whoever finds it can mine from the device, provided they know how.

AI is about continuous learning. The more you use it, the more useful it becomes; and the more data it has to play with, the better its responses.

AI is not about becoming sentient. It is about your cellphone or your smart devices being able to help you because they already know you. With more information about you, the AI has more data from which to arrive at a relevant answer. Relevance is measured by whether you have accessed that data before, or whether you have searched for or posted it before. The information is there; it only needs a tool to dig it out. In this case, AI is that tool, whose job is to dig up information about you based on the things you do with your cellphone or desktop.