Just a few months ago, a team of researchers at Google Brain presented their newest creation: an artificial intelligence (AI) called AutoML that has the ability to create its own AIs.
More recently, they challenged AutoML to create a “child” AI that could outperform all of its human-made counterparts.
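This parent-and-child setup can be pictured as a search loop: the parent proposes candidate model configurations, scores each one, and keeps the best child. The sketch below is our own toy illustration of that idea, far simpler than Google's actual AutoML, and the scoring function is entirely hypothetical.

```python
import random

def evaluate(architecture):
    """Stand-in for training a child model and measuring its accuracy.
    Hypothetical: rewards deeper networks of moderate width."""
    layers, width = architecture
    return layers * 0.1 + width * 0.01 - (width ** 2) * 0.0001

random.seed(0)
best_arch, best_score = None, float("-inf")
for _ in range(50):  # the "parent" proposes 50 random child architectures
    candidate = (random.randint(1, 8), random.randint(8, 128))  # (layers, width)
    score = evaluate(candidate)
    if score > best_score:
        best_arch, best_score = candidate, score

print(best_arch)  # the highest-scoring child configuration found
```

Real systems like AutoML replace the random proposals with a learned controller and replace `evaluate` with actually training each child network, but the outer loop has the same shape.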
For this particular challenge, AutoML was tasked with producing a system able to recognize objects such as people, cars, traffic lights, handbags, and backpacks in video in real time; the resulting child AI is called NASNet.
NASNet showed outstanding results when tested on the ImageNet image classification and COCO object detection data sets, which the researchers describe as among “the most respected academic data sets in computer vision.”
With an accuracy of more than 80 percent at predicting images on ImageNet’s validation set, NASNet performed better than any previously published result. The system is also 4 percent more efficient, with a 43.1 percent mean Average Precision (mAP) on object detection.
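For readers unfamiliar with the two figures quoted above, here is a minimal sketch of how such metrics are commonly computed; this is an illustration of the general definitions, not Google's evaluation code, and the numbers are toy data.

```python
def top1_accuracy(predictions, labels):
    """Fraction of images where the top-ranked class matches the true label."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)

def average_precision(ranked_hits):
    """Average precision for one object class: ranked_hits lists detections
    sorted by confidence, True where the detection is correct."""
    hits, precisions = 0, []
    for rank, hit in enumerate(ranked_hits, start=1):
        if hit:
            hits += 1
            precisions.append(hits / rank)  # precision at this recall point
    return sum(precisions) / hits if hits else 0.0

# Toy example: 4 of 5 top-1 guesses correct -> 80% accuracy.
print(top1_accuracy([3, 1, 4, 1, 5], [3, 1, 4, 1, 6]))  # 0.8
# mAP is the mean of average_precision taken over all object classes.
```

The real COCO mAP additionally averages over bounding-box overlap thresholds, but the precision-over-ranked-detections core is the same.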
A VIEW OF THE FUTURE
What gives many AI systems the ability to perform these specific tasks is machine learning, with a relatively simple algorithm behind it. The algorithm learns by accumulating data; however, this is a process that requires huge amounts of effort and time.
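As a concrete picture of “learning by accumulating data,” the sketch below fits a single weight to toy data by gradient descent. The data and learning rate are hypothetical, chosen only to show the repeated adjust-from-error loop that real training scales up enormously.

```python
# Toy data: pairs (x, y) generated by the rule y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # initial guess for the weight
lr = 0.05  # learning rate: how big each correction step is

for _ in range(200):           # pass over the data many times
    for x, y in data:
        error = w * x - y      # how far off the current model is
        w -= lr * error * x    # nudge w to shrink the squared error

print(round(w, 3))  # converges toward 2.0, the rule behind the data
```

A network like NASNet adjusts millions of weights this way across millions of images, which is exactly why the process demands so much time and computation.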
Accurate, efficient computer vision algorithms like NASNet are in high demand, and could be used to create sophisticated AI-powered robots. A researcher suggested that this AI could even help visually impaired people regain their sight, or assist designers who wish to improve self-driving vehicle technologies.
Researchers believe that NASNet could be useful for creating many applications.
“We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined,” they wrote in their blog post.
However, an AI creating a “child” AI is somewhat concerning, mostly because of the possibility of it “passing down unwanted biases to its child.” There is also a possibility that AutoML will create systems our society can’t keep up with, but thankfully world leaders are working fast to prevent such systems from leading to that unwanted future.
A research company owned by Google’s parent company, Alphabet, recently announced the creation of a group that will focus its work on the moral and ethical implications of AI. Governments are also working on regulations intended to prevent dangerous uses of AI, such as the creation of autonomous weapons.
This artificial intelligence could be very useful, as long as humans maintain control of the overall direction of AI development.