© 2017 Neon Nettle


Google Creates AI That Gives Birth To Baby AIs That 'Outperform' Humans

Search giant may have created a replacement for humans

By: Daniel Newton | @NeonNettle, 6th December 2017, 1:33pm
Google unveiled its biggest challenge yet for AutoML: creating and training its own AI (© press)

If you can remember back in May 2017, researchers at Google Brain announced they had created an artificial intelligence (AI) named AutoML, an AI capable of creating its own AIs.

More recently, Google unveiled its biggest challenge to date for AutoML, and of course, the AI created another "child" AI that put all of its human-made counterpart AIs to shame.

Google researchers said the "child" AI, NASNet, created by AutoML, was tasked with recognizing objects (people, cars, traffic lights, handbags, backpacks) in a video in real time.

AutoML would evaluate NASNet’s performance of the task, then use the information gathered from the data to improve its "child", even repeating the process "over and over".
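The evaluate-and-refine loop described above can be illustrated with a highly simplified sketch. This is plain Python over a toy search space with a made-up scoring function, not Google's actual reinforcement-learning controller; the names and the scoring rule are illustrative assumptions only:

```python
import random

# Toy search space: each "architecture" is just a choice of depth and width.
SEARCH_SPACE = {"depth": [2, 4, 8], "width": [16, 32, 64]}

def evaluate(arch):
    """Stand-in for training the child network and measuring its accuracy.
    In this toy, deeper and wider architectures simply score higher."""
    return arch["depth"] * 0.05 + arch["width"] * 0.005

def controller_loop(rounds=20, seed=0):
    """Repeatedly propose a child architecture, evaluate it, and keep the
    best one seen so far -- the 'over and over' refinement in miniature."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(rounds):
        arch = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = controller_loop()
print(best, round(score, 3))
```

The real system replaces random sampling with a learned controller and replaces the toy `evaluate` with actually training each child on the task, which is what makes the search so computationally expensive.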

Futurism reports: When tested on the ImageNet image classification and COCO object detection data sets, which the Google researchers call “two of the most respected large-scale academic data sets in computer vision,” NASNet outperformed all other computer vision systems.

According to the researchers, NASNet was 82.7 percent accurate at classifying images on ImageNet's validation set, 1.2 percent better than any previously published result. On object detection, the system was also 4 percent more efficient, with a 43.1 percent mean Average Precision (mAP).
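To make concrete what "82.7 percent accurate on a validation set" means: top-1 accuracy is simply the fraction of validation images whose single highest-scoring predicted class matches the true label. A minimal sketch with toy scores (nothing like real ImageNet data):

```python
def top1_accuracy(predictions, labels):
    """Fraction of examples where the top-scoring class equals the true label."""
    correct = sum(
        1 for scores, label in zip(predictions, labels)
        if max(range(len(scores)), key=scores.__getitem__) == label
    )
    return correct / len(labels)

# Toy validation set: per-image class scores and the true class index.
preds = [[0.1, 0.7, 0.2],   # predicts class 1
         [0.6, 0.3, 0.1],   # predicts class 0
         [0.2, 0.2, 0.6],   # predicts class 2
         [0.5, 0.4, 0.1]]   # predicts class 0
truth = [1, 0, 2, 1]        # the last prediction is wrong

print(top1_accuracy(preds, truth))  # 3 of 4 correct -> 0.75
```

ImageNet's validation set has 50,000 images, so NASNet's 82.7 percent corresponds to roughly 41,350 correctly classified images under this metric.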

Additionally, a less computationally demanding version of NASNet outperformed the best similarly sized models for mobile platforms by 3.1 percent.

A View of the Future

Machine learning is what gives many AI systems their ability to perform specific tasks. Although the concept behind it is fairly simple — an algorithm learns by being fed a ton of data — the process requires a huge amount of time and effort. By automating the process of creating accurate, efficient AI systems, an AI that can build AI takes on the brunt of that work. Ultimately, that means AutoML could open up the field of machine learning and AI to non-experts.
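The "algorithm learns by being fed a ton of data" idea can be shown at its simplest with a one-parameter model fit to labeled examples. This is a hypothetical toy classifier, orders of magnitude simpler than the neural networks the article describes:

```python
def fit_threshold(xs, ys):
    """Learn a decision threshold from labeled data: try each candidate
    split and keep the one that classifies the most examples correctly."""
    best_t, best_correct = None, -1
    for t in xs:
        correct = sum(1 for x, y in zip(xs, ys) if (x >= t) == bool(y))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Labeled training data: inputs, and whether each is "positive" (1) or not (0).
xs = [1.0, 2.0, 3.0, 6.0, 7.0, 8.0]
ys = [0,   0,   0,   1,   1,   1]

threshold = fit_threshold(xs, ys)
print(threshold)  # learns the split at 6.0, separating the two groups
```

Even in this tiny example, the pattern is the one the paragraph describes: the rule is not hand-written but extracted from data, and better data yields a better rule. Systems like AutoML automate the much harder version of this, where the thing being chosen is an entire network architecture.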

As for NASNet specifically, accurate, efficient computer vision algorithms are highly sought after due to the number of potential applications. They could be used to create sophisticated, AI-powered robots or to help visually impaired people regain sight, as one researcher suggested. They could also help designers improve self-driving vehicle technologies. The faster an autonomous vehicle can recognize objects in its path, the faster it can react to them, thereby increasing the safety of such vehicles.

The Google researchers acknowledge that NASNet could prove useful for a wide range of applications and have open-sourced the AI for inference on image classification and object detection. “We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined,” they wrote in their blog post.

Though the applications for NASNet and AutoML are plentiful, the creation of an AI that can build AI does raise some concerns. For instance, what’s to prevent the parent from passing down unwanted biases to its child? What if AutoML creates systems so fast that society can’t keep up?

It’s not very difficult to see how NASNet could be employed in automated surveillance systems in the near future, perhaps sooner than regulations could be put in place to control such systems.

Thankfully, world leaders are working fast to ensure such systems don’t lead to any sort of dystopian future.

Amazon, Facebook, Apple, and several others are all members of the Partnership on AI to Benefit People and Society, an organization focused on the responsible development of AI.

The Institute of Electrical and Electronics Engineers (IEEE) has proposed ethical standards for AI, and DeepMind, a research company owned by Google's parent company Alphabet, recently announced the creation of a group focused on the moral and ethical implications of AI.

Various governments are also working on regulations to prevent the use of AI for dangerous purposes, such as autonomous weapons. So long as humans maintain control of the overall direction of AI development, the benefits of having an AI that can build AI should far outweigh any potential pitfalls.
