Google wants to bring Artificial Intelligence into its services, and to that end it maintains numerous experiments that sound bizarre, to say the least. The latest one focuses on rapidly detecting human movement and comparing it against a vast catalog of images of people striking the same pose.
The AI selects the image that most closely resembles our movement from a catalog of 80,000 photographs
The experiment is called Move Mirror, and it selects the photo that best matches a person's movement from a pool of 80,000 files. At this early stage of research it is not especially practical, but it is striking.
Google has created a web page where anyone who wants to try the system can do so by activating the camera of their phone or computer. After all, Artificial Intelligences such as neural networks feed on experience and sharpen their responses with it.
How does it work? Once we grant access to our camera, the AI identifies the movement points of a human body, the joints, using a computer vision model called PoseNet. It then only remains for the AI to select the photo that most closely resembles our pose, and its speed is surprising: as we move, the algorithm's selection changes with us.
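The matching step described above can be sketched in a few lines. This is a minimal illustration, not Google's actual implementation: it assumes each pose is flattened into a vector of (x, y) joint coordinates, and it uses cosine distance on size-normalized vectors as the similarity metric (the function names and the tiny two-entry catalog are invented for the example):

```python
import math

def normalize(pose):
    """Scale a flat [x0, y0, x1, y1, ...] keypoint vector to unit length,
    so matching is insensitive to how large the person appears in frame."""
    norm = math.sqrt(sum(v * v for v in pose))
    return [v / norm for v in pose]

def cosine_distance(a, b):
    """Distance between two unit-normalized pose vectors (0 = identical)."""
    return 1 - sum(x * y for x, y in zip(a, b))

def best_match(live_pose, catalog):
    """Return the index of the catalog pose closest to the live pose."""
    live = normalize(live_pose)
    return min(range(len(catalog)),
               key=lambda i: cosine_distance(live, normalize(catalog[i])))

# Tiny illustrative catalog: each entry is a flattened keypoint list.
catalog = [
    [1.0, 0.0, 0.0, 1.0],  # pose 0
    [0.0, 1.0, 1.0, 0.0],  # pose 1
]
print(best_match([2.0, 0.1, 0.1, 2.0], catalog))  # closest to pose 0
```

Because each comparison is just a dot product, scanning even tens of thousands of catalog poses per camera frame is cheap, which is consistent with the near-instant selection the article describes.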
The system is built on TensorFlow.js, a machine learning library that runs entirely within the browser, so users are not actually handing their images over to Google: the matching happens within the user's own browser.
In fact, this is one of the principles meant to guide the development of trustworthy, user-facing Artificial Intelligence: privacy stays in the hands of the individual, who does not have to hand over an iota of information to third parties to run a search.
If it were to become an official Google tool, it would mean that beyond searching by written description, or by comparing two images, we could express with our own bodies what we are asking Google for. Move Mirror is available here for anyone who wants to see the speed of Artificial Intelligence with their own eyes.