Scientists are developing a new artificial intelligence that works closer to the human brain




Researchers are constantly working to advance artificial intelligence and close the gap with human intelligence. In recent experiments, scientists have observed that some AI programs are beginning to process information in ways that more closely resemble the human brain.

A decade ago, researchers built many of the most advanced AI systems by using enormous stores of data to "train" an artificial neural network to recognize things correctly.


Such closely supervised training requires humans to label the data, which is very laborious, and neural networks often take shortcuts, learning to associate categories with minimal, sometimes superficial, information. For example, an artificial neural network (a brain-inspired web of simple computing units) might use the presence of grass to recognize an image of a cow, because cows are usually photographed in fields.
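As a toy illustration of this shortcut problem (with entirely made-up "photos" represented as feature sets, and a deliberately simplistic learner), a rule that picks the single most predictive feature can latch onto the grass rather than the cow:

```python
def best_stump(examples):
    """Pick the single feature whose presence best predicts the label 'cow'."""
    features = {f for x, _ in examples for f in x}
    def accuracy(feat):
        return sum((feat in x) == (y == "cow") for x, y in examples) / len(examples)
    return max(features, key=accuracy)

# Made-up training photos: cows always appear on grass, and in one photo
# the cow itself is partly hidden, so "grass" predicts the label perfectly.
train = [
    ({"cow", "grass"}, "cow"),
    ({"grass"}, "cow"),          # occluded cow, but the grass is visible
    ({"sand"}, "not_cow"),
    ({"road"}, "not_cow"),
]

shortcut = best_stump(train)     # the learner settles on "grass"
beach_cow = {"cow", "sand"}      # a cow photographed on a beach
prediction = "cow" if shortcut in beach_cow else "not_cow"  # misclassified
```

The shortcut is invisible as long as test photos look like training photos; it only surfaces when a cow appears without grass.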


A cross between animal intelligence and artificial intelligence


“Computers and AI programs don't really learn the subject matter, but they do a good job on the test,” said Alexei Efros, a computer scientist at the University of California, Berkeley.


Moreover, for researchers interested in the intersection of animal and artificial intelligence, this "supervised learning" may be limited in what it can reveal about biological brains. Animals, including humans, do not learn from labeled data sets; instead, the bulk of their experience comes from exploring the environment on their own, which gives them a rich and robust understanding of the world.


Today, some experts in computational neuroscience (the study of brain function in terms of the information-processing properties of the structures that make up the nervous system) are beginning to explore artificial neural networks trained with little or no human-labeled data.

Matches with animal brain functions


These "self-supervised learning" algorithms have proven very successful at modeling human language and, more recently, at recognizing and distinguishing images. In a recent study, computational models of the mammalian visual and auditory systems that were built using self-supervised learning showed a closer match to brain function than their supervised-learning counterparts.


For some neuroscientists, it appears that artificial networks are beginning to reveal some of the actual methods that human and animal brains use to learn.


Neuroscientists first developed simple computer models of the visual system using artificial neural networks, comparing the responses of monkeys shown a set of images with the responses of the networks to the same images. Similar correspondences were later discovered in networks trained to detect sounds and smells.


Through repeated trials of AI programs built on artificial neural networks, scientists are beginning to see a model of learning that approaches the human style, Communications of the ACM reported.


"I think there's no question that 90 percent of what the brain does is self-supervised learning," says Blake Richards, a computational neuroscientist at the Quebec Artificial Intelligence Institute. Biological brains are thought to be constantly predicting, for example, the future location of an object as it moves, or the next word in a sentence, just as a self-supervised algorithm tries to predict the missing part of an image or piece of text. Brains also learn from their mistakes on their own; only a small part of the feedback our brains receive comes from an external source saying, in effect, "wrong answer."
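A minimal sketch of that next-word idea (using a toy corpus and simple bigram counts, nothing like a real brain model): the "label" for each position is just the word that actually comes next, so the text supervises itself and no human annotation is needed.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# The supervision signal is the data itself: each word's "label" is its successor.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Predict the most likely next word; a wrong guess is its own error signal."""
    return bigrams[word].most_common(1)[0][0]
```

Scaled up enormously, this same fill-in-the-blank objective is what modern self-supervised language models are trained on.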


Close results


Richards and his team created a self-supervised model to test this idea. They trained an AI that combined two different neural networks: the first, based on the ResNet architecture, was designed to process images; the second, known as a recurrent network, could track sequences of inputs over time, such as moving objects.



Richards' team found that an AI trained with a single ResNet was good at recognizing objects but not at classifying motion. But when they split the network in two, creating two pathways (without changing the total number of neurons), the AI developed one pathway for recognizing static objects and one for movement, allowing it to successfully sort the scenes it was shown, which the scientists think resembles the way our own brains work.
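That division of labor can be sketched with a hypothetical toy (tiny "frames" of pixel lists, nothing like the real networks): one pathway keeps only appearance by averaging over time, while the other keeps only change by differencing consecutive frames.

```python
def static_pathway(frames):
    """Average each pixel across frames: motion cancels out, appearance stays."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def motion_pathway(frames):
    """Frame-to-frame differences: appearance cancels out, change stays."""
    return [[b - a for a, b in zip(f0, f1)]
            for f0, f1 in zip(frames, frames[1:])]

frames = [[0, 1, 0],   # a bright pixel...
          [0, 0, 1]]   # ...shifting one step to the right

appearance = static_pathway(frames)   # [0.0, 0.5, 0.5]
motion = motion_pathway(frames)       # [[0, -1, 1]]
```

Each pathway discards exactly what the other preserves, which is why splitting a fixed budget of units into two streams can outperform one undifferentiated network.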


To test the AI further, the research team showed the artificial neural network and a group of mice the same set of videos. Notably, mouse brains contain areas specialized for static images and other areas specialized for movement.


The researchers recorded neural activity in the mice's visual cortex while they watched the videos. Richards' team found similarities in the way the artificial network and the living animal brains responded to those clips: during training, one of the pathways in the artificial neural network became more similar to the regions of the mouse brain that detect stationary objects, and the other pathway became more similar to the regions that focus on movement.


In the end, however, the scientists note that the human or animal brain is full of so-called feedback connections, while current AI models have very few, if any. How to build in such connections remains a crucial open question, and those connections are among the most important factors distinguishing the human brain from even advanced AI programs.

