When will we have true artificial intelligence?
The research field of artificial intelligence has come a long way, but many believe it was officially born in the summer of 1956, when a group of researchers gathered at Dartmouth College. In the years before that, computers had improved enormously; they already performed computational operations far faster than humans could. Given that incredible progress, the scientists' optimism is understandable. A few years earlier, the brilliant computer scientist Alan Turing had suggested that thinking machines would emerge, and researchers had arrived at a simple idea: intelligence is, at bottom, just a mathematical process. The human brain is, to some extent, a machine. Isolate the process of thinking, and a machine will be able to simulate it.
At the time, the problem did not seem especially difficult. The Dartmouth researchers wrote: "We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer." This proposal, incidentally, contained one of the first uses of the term "artificial intelligence". Ideas abounded: perhaps imitating the brain's neural circuitry could teach a machine the abstract rules of human language.
The scientists were optimistic, and their efforts were rewarded. They built programs that seemed to understand human language and could solve algebra problems. People confidently predicted that human-level machine intelligence would appear within twenty years.
Fittingly, the field of forecasting when human-level artificial intelligence will arrive was born at about the same time as AI itself. In fact, it all goes back to Turing's first paper on thinking machines, in which he predicted that the Turing test, in which a machine must convince a person that it, too, is human, would be passed within 50 years, by the year 2000. Today, of course, people still predict that this will happen within the next 20 years; Ray Kurzweil is among the most famous of these prophets. There are so many opinions and forecasts that it sometimes seems AI researchers have set up an answering machine with the message: "I have already predicted your question, but no, I cannot predict exactly when." The trouble with trying to predict an exact date for human-level AI is that we do not know how far our current approaches can take us. This is not like Moore's law. Moore's law, the doubling of computing power every couple of years, makes a specific prediction about a specific phenomenon. We understand roughly how to move forward (by improving silicon chip technology), and we know that our current approach is not fundamentally limited until we start working with chips at the atomic scale. The same cannot be said of artificial intelligence.
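The contrast can be made concrete. Moore's law is a checkable forecast precisely because "doubling every couple of years" yields a specific number for any future date, which reality can then confirm or refute. A minimal sketch (the function name and parameters are illustrative, not from any standard library):

```python
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Predicted multiplicative growth in computing power after `years`,
    assuming one doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Over 20 years at a 2-year doubling period, the forecast is a
# 2**10 = 1024x increase, a figure we can later check against reality.
print(moores_law_factor(20))   # 1024.0
print(moores_law_factor(10))   # 32.0
```

No analogous formula exists for "years until human-level AI": there is no agreed-upon quantity that doubles, so there is nothing to extrapolate or verify.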
Stuart Armstrong's study examined trends in these predictions. In particular, he looked for two main cognitive biases. The first was the tendency of AI experts to predict that AI will arrive (and make them immortal) just before they die. This is the "nerd rapture" critique often leveled at Kurzweil: his predictions are motivated by a fear of death and a desire for immortality, and are fundamentally irrational, with the creator of a superintelligence becoming almost an object of worship. The critics are usually people who work in AI themselves and know firsthand the frustrations and limitations of today's systems.
The second bias is that forecasters tend to pick a window of 15 to 20 years. That is close enough to convince people they are working on something that will be revolutionary soon (people are less drawn to efforts whose payoff lies centuries away), but not so close that they will immediately be proven embarrassingly wrong. People are happy to predict AI's arrival within their own lifetimes, but preferably not tomorrow or next year: 15 to 20 years is just right.
Measuring progress
Armstrong notes that if you want to evaluate the credibility of a specific forecast, there are many parameters you can examine. For example, the idea that human-level intelligence will be achieved by simulating the human brain at least gives you a clear scheme for assessing progress. Every time we obtain a more detailed map of the brain, or successfully simulate some part of it, we make measurable progress toward a specific goal that is expected to result in human-level AI. Twenty years may not be enough to reach that goal, but at least we can measure the progress scientifically.
Now compare that approach with the claims of those who say that AI, or something conscious, will simply "emerge" once a network becomes complex enough and has sufficient processing power. Perhaps this intuition comes from picturing human intellect and consciousness as products of evolution, but evolution took billions of years, not decades. The real problem is that we have no empirical data: we have never seen consciousness arise from a complex network. Not only do we not know whether this is possible, we cannot know when to expect it, because we have no way to measure progress along that path.
There is an enormous difficulty in understanding which problems are genuinely hard, and it has haunted AI from its birth to the present day. Understanding human language, handling contingency and creativity, achieving self-improvement, all at once: it is simply impossible. We have learned to process natural language, but do our computers understand what they are processing? We have built AI that seems "creative", but is there even a shred of creativity in its actions? Exponential self-improvement leading to a singularity seems altogether transcendental. We do not understand what intelligence is. For example, experts have consistently underestimated AI's ability to play Go. In 2015, many thought an AI would not learn to play Go at a champion level until 2027. It took only two years, not twelve. Does that mean AI will write the greatest novel within a few years? Conceptually understand the world? Approach human-level intelligence? Unknown.
Not human, but smarter than humans
Perhaps we are framing the problem wrongly. The Turing test, for example, has not yet been passed in the sense of an AI convincing a person in conversation that it is human; yet AI's computational capabilities, along with its abilities to recognize patterns and drive cars, already far exceed human levels. The more decisions are handed to "weak" AI algorithms, the more the Internet of Things grows, and the more data is fed to neural networks, the greater the impact of this "artificial intelligence" will become.
Perhaps we do not yet know how to create human-level intelligence, but equally we do not know how far the current generation of algorithms can go. They are nowhere near the terrifying algorithms that undermine the social order or coalesce into some hazy superintelligence. At the same time, that is no reason to cling to optimistic forecasts. We must make sure that the value of human life, morality, and ethics are always built into our algorithms, so that algorithms never become entirely inhuman.
Any such forecast should be taken with a grain of salt. Remember that in the early days of AI, success seemed just around the corner. It still does today. Sixty years have passed since scientists gathered at Dartmouth in 1956 to "create intelligence within twenty years," and we are still carrying on their work.