What can make artificial intelligence really intelligent?

Automated stupidity

Despite the many concerns about artificial intelligence and its growing role in society, the fact is that today's generation of artificial intelligence programs is not intelligent at all.

There are basically two types of machine learning: deep neural networks, responsible for the famous "deep learning", and reinforcement learning. Both rely on training a system with huge amounts of data to perform one specific task, for example making a decision.

During training, the desired result is provided along with the task. Over time, the program learns to solve the task with increasing speed and accuracy, although no one understands exactly how the program works internally - this is the so-called "black box" of artificial intelligence.
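The supervised setup described above - inputs shown together with the desired outputs, and the program's parameters adjusted to reduce the error - can be sketched in a few lines. The data, model, and learning rate below are purely illustrative, not from the research described in the article.

```python
# Minimal supervised training loop: learn the rule y = 2*x from
# labeled examples. All numbers here are illustrative.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, desired output) pairs
w = 0.0    # single trainable parameter
lr = 0.05  # learning rate

for epoch in range(200):
    for x, y_true in data:
        y_pred = w * x
        error = y_pred - y_true
        w -= lr * error * x  # gradient step on the squared error

print(round(w, 3))  # converges toward 2.0
```

The trained parameter solves exactly this one task; show the same loop data from a different rule and it must be retrained from scratch, which is the inflexibility the article criticizes.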

"The problem with these machine learning processes is that they are basically completely dumb," says Professor Laurenz Wiskott of Ruhr University in Germany. "The underlying techniques date back to the 1980s. The only reason for today's success is that we have more computing power and more data at our disposal today."

But Professor Wiskott's team is trying to rid artificial intelligence of this stupidity and make it genuinely smart.

Unsupervised artificial intelligence

Today's artificial intelligence can surpass humans only at the specific task each program has been trained for; it cannot generalize or transfer its knowledge even to similar tasks.

"What we want to know is, how can we avoid all this absurd and long training? And most of all: how can we make machine learning more flexible?" said Wiskott.

The strategy is to help machines autonomously discover structures in data. Tasks can include, for example, category formation or detection of gradual changes in videos. The idea is that this unsupervised learning allows computers to autonomously explore the world and perform tasks for which they have not been trained in detail.

"A task could be, for example, forming clusters," explains Wiskott. To do this, the computer is instructed to group similar data in search, for example, of a face in a photo. Turning the pixels into points in a three-dimensional space means grouping points whose coordinates are close to each other. If the distance between coordinates is greater, they will be allocated to different groups. This dispenses with the enormity of photos and their descriptions as used today.

This method offers more flexibility because cluster formation applies not only to pictures of people, but also to cars, plants, houses and other objects.

The Slowness Principle

Another approach taken by the team is the slowness principle. In this case, it is not still photos that constitute the input signal, but moving images: if all the features that change very slowly are extracted from a video, structures emerge that help construct an abstract representation of the environment. "Here, too, the goal is to pre-structure the input data," says Wiskott.
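A crude sketch of the slowness idea: given a sequence of feature values (one vector per video frame), rank the features by how much they change between consecutive frames and keep the slowest. The frame data below is made up, and real slow-feature methods learn new slowly varying combinations of the inputs rather than merely ranking existing ones; this only illustrates the "slow means structure" intuition.

```python
# Slowness sketch: from a sequence of feature vectors (one per frame),
# keep the feature that changes most slowly over time.

def slowness(values):
    """Mean squared change between consecutive time steps:
    small values mean the feature varies slowly."""
    return sum((b - a) ** 2 for a, b in zip(values, values[1:])) / (len(values) - 1)

# Each row is one frame; columns are three candidate features.
# Features 0 and 1 flicker quickly; feature 2 drifts slowly.
frames = [
    (0.0, 1.0, 0.00),
    (1.0, 0.9, 0.01),
    (0.0, 1.1, 0.02),
    (1.0, 1.0, 0.03),
]

features = list(zip(*frames))  # one tuple per feature over time
slowest = min(range(len(features)), key=lambda i: slowness(features[i]))
print(slowest)  # the slowly drifting feature is kept as structure
```

In a real video, fast-changing features correspond to pixel noise and flicker, while slow ones track persistent things such as object identity or position - exactly the pre-structuring of the input data Wiskott describes.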

Ultimately, the researchers combine the two approaches in a modular way with supervised learning methods to create applications that are more flexible and still highly accurate.

"Greater flexibility naturally results in loss of performance," admits the researcher. "But in the long run, flexibility is indispensable if we want to develop robots that can handle new situations."
