In a piece published in MIT Technology Review, Gary Marcus, a professor at New York University and an expert in cognitive science, warned about the dangers of the widespread use of neural networks based on deep learning.
First, the scientist believes the technology has clear limitations. There has long been talk of what it would take to create so-called "real artificial intelligence," suitable for a wide range of tasks rather than a single narrow one, as is the case now. In his view, existing systems have already reached the peak of their development and have nowhere left to grow. Moreover, you cannot simply teach one AI to drive a car and another to repair cars, and then merge the two into a universal assistant: the resulting systems would be unable to interact because they "learned differently."
You can teach an AI to play Atari better than humans, but you cannot make a good self-driving car, even though that task is also quite specialized. Deep learning is good at analyzing big data, but its algorithms do not see cause-and-effect relationships and cope poorly with any change in conditions. Shift the elements of a computer game by two or three pixels, and the trained AI becomes ineffective. Make the Go board rectangular instead of square, and the artificial mind will lose even to a novice player.
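This pixel-shift fragility is easy to reproduce in miniature. Below is a minimal, self-contained sketch (a toy setup constructed for illustration, not taken from the article): a classifier that keys on absolute pixel positions is trained on two synthetic image classes that differ only in where a bright patch sits, and a two-pixel shift at test time collapses its accuracy to chance.

```python
# Toy illustration (assumed setup, not from the article): a classifier
# with no built-in translation invariance breaks when test inputs are
# shifted by just two pixels, mirroring the fragility Marcus describes.
import numpy as np

rng = np.random.default_rng(0)
SIZE = 16  # image side length for this toy example

def make_image(cls, shift=0):
    """Class 0: bright 4x4 patch at (2, 2); class 1: same patch at (4, 4)."""
    img = rng.normal(0.0, 0.1, (SIZE, SIZE))  # background noise
    r = c = 2 if cls == 0 else 4
    img[r + shift:r + shift + 4, c + shift:c + shift + 4] += 1.0
    return img.ravel()

# "Training": a nearest-centroid classifier on raw pixels, standing in
# for any model that memorizes surface statistics at fixed positions.
centroids = {cls: np.mean([make_image(cls) for _ in range(200)], axis=0)
             for cls in (0, 1)}

def predict(x):
    return min(centroids, key=lambda cls: np.linalg.norm(x - centroids[cls]))

def accuracy(shift):
    tests = [(make_image(cls, shift), cls) for cls in (0, 1) for _ in range(100)]
    return float(np.mean([predict(x) == y for x, y in tests]))

print(f"accuracy on unshifted test images: {accuracy(0):.2f}")  # ~1.00
print(f"accuracy after a 2-pixel shift:    {accuracy(2):.2f}")  # ~0.50, chance
```

A convolutional network would shrug off this particular shift thanks to its built-in translation invariance, but the same brittleness reappears under any change the model was never built to expect, which is the broader point of the Go-board example.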
How to make artificial intelligence smarter
For the algorithms to become more effective, they need to be "taught differently": they must learn to see the relationships between objects and the consequences of interacting with them. Humans are the best example of this.
Hire interns, and within a few days they can start working on almost any problem, from law to medicine. Not because they are all brilliant, but because people share a general understanding of the world around them rather than a narrow, task-specific one.
What Marcus proposes is not new at all. The example above is how scientists once imagined "classical artificial intelligence." But for such a system to work effectively, every possible outcome would have to be programmed in advance, which is practically unrealistic.
The solution could be a symbiosis of "classical artificial intelligence," which sees relationships and reaches conclusions in an interpretable way, and deep learning, which can find solutions through trial and error. The result would be a basic system of rules about the surrounding world, on top of which systems could develop themselves in a particular domain; a sketch of this idea follows below. Real artificial intelligence must understand how things work, grasp cause-and-effect relationships, and switch easily from one task to another. Modern systems built with deep learning are simply not capable of this.
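To make the hybrid idea concrete, here is a minimal sketch (all names, rules, and scores are illustrative assumptions, not an established framework or Marcus's own design): a statistical scorer, standing in for a trained network, proposes actions, while a small symbolic rule base encoding preconditions and cause-effect knowledge vetoes the infeasible ones.

```python
# Hypothetical hybrid agent: a learned component ranks actions, and an
# explicit, interpretable rule base filters out those whose real-world
# preconditions fail. Everything here is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class WorldState:
    holding_object: bool
    door_open: bool

# "Classical" part: explicit cause-effect rules about the world.
RULES = {
    "pick_up":           lambda s: not s.holding_object,  # hands must be free
    "put_down":          lambda s: s.holding_object,
    "walk_through_door": lambda s: s.door_open,           # effect needs its cause
    "open_door":         lambda s: not s.door_open,
}

def learned_scores(state):
    """Stand-in for the deep-learning component: a preference score per
    action. Here it is a fixed table; in a real system it would be a
    trained network conditioned on the state."""
    return {"pick_up": 0.9, "walk_through_door": 0.8,
            "open_door": 0.4, "put_down": 0.1}

def choose_action(state):
    scores = learned_scores(state)
    # Symbolic layer: veto actions whose preconditions fail.
    feasible = {a: v for a, v in scores.items() if RULES[a](state)}
    return max(feasible, key=feasible.get) if feasible else None

state = WorldState(holding_object=False, door_open=False)
print(choose_action(state))  # "pick_up": the door is closed, so the
                             # higher-scoring "walk_through_door" is vetoed
```

The design point is that the rule base stays interpretable and transfers across tasks, while the learned scorer can be retrained for each domain without touching the rules, which is roughly the division of labor the hybrid proposal envisions.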