
History of Artificial Intelligence

AI, a discipline roughly sixty years old, brings together mathematical logic, statistics, probability, computational neuroscience, and computer science in an attempt to imitate human cognitive abilities. Initiated in the wake of the Second World War, its development is closely tied to that of computing and has allowed computers to take on increasingly complex tasks that previously only humans could perform.

Some experts criticize the label, since this automation falls far short of human intelligence in the strict sense. The ultimate goal of the research (a "strong" AI able to handle very different specialized problems fully autonomously) is in no way comparable to current achievements ("weak" or "narrow" AIs that are extremely efficient within their training domain). The "strong" AI of science fiction would require fundamental advances in basic research, not just performance gains, in order to model the world as a whole.

1940-1960: Birth of AI in the wake of cybernetics

The period between 1940 and 1960 was marked by the conjunction of technological advances (accelerated by the Second World War) and the desire to understand how the functioning of machines and living beings could be brought together. Norbert Wiener, a pioneer of cybernetics, sought to unify mathematics, electronics, and automation into "a whole theory of control and communication, both in animals and machines". Warren McCulloch and Walter Pitts created the first mathematical and computer model of the biological neuron (the formal neuron) in 1943.
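As an illustration only (the weights and threshold below are arbitrary, not values from the 1943 paper), a McCulloch-Pitts-style formal neuron can be sketched as a simple threshold unit over binary inputs:

```python
# Minimal sketch of a formal (McCulloch-Pitts-style) neuron: binary inputs,
# fixed weights, and a threshold decide whether the unit "fires".
# The weights and threshold are illustrative, not taken from the 1943 paper.

def formal_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of the binary inputs reaches the threshold, else 0."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With these values the unit behaves like a logical AND gate.
print(formal_neuron([1, 1], weights=[1, 1], threshold=2))  # -> 1
print(formal_neuron([1, 0], weights=[1, 1], threshold=2))  # -> 0
```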

In the early 1950s, John von Neumann and Alan Turing moved computing from the decimal logic of the 19th century (which dealt with values from 0 to 9) to machines based on binary logic (which relies on Boolean algebra, operating on chains of 0s and 1s of arbitrary length). The two researchers thus formalized the architecture of modern computers and showed that they were universal machines capable of executing whatever is programmed. Turing, for his part, first raised the question of machine intelligence in his 1950 article "Computing Machinery and Intelligence", where he described an "imitation game" in which a human must determine, in a teletype dialogue, whether they are conversing with a man or a machine. Controversial as it remains (this "Turing test" does not qualify as valid for many specialists), the article is often cited as the origin of the questioning of the boundary between human and machine.

The term "AI" is credited to John McCarthy of MIT, and Marvin Minsky defined it as "the construction of computer programs that engage in tasks that are, for the moment, performed more satisfactorily by human beings because they require high-level mental processes such as perceptual learning, memory organization, and critical reasoning". The summer 1956 conference at Dartmouth College, funded by the Rockefeller Institute, is considered the founding moment of the discipline. Anecdotally, it is worth noting the limited success of what was in fact a workshop rather than a conference: only six people, including McCarthy and Minsky, remained present throughout this work, which relied essentially on formal logic.

In the early 1960s, the technology lost some of its luster, even though it remained fascinating and promising (see, for example, Reed C. Lawlor's 1963 paper "What Computers Can Do: Analysis and Prediction of Judicial Decisions"). Machines had very little memory, which made computer languages difficult to use. The LTM (Logic Theorist Machine) program, which aimed to prove mathematical theorems, had nevertheless been written as early as 1956 in IPL (information processing language).

1980-1990: Expert systems

In 1968, Stanley Kubrick directed the film "2001: A Space Odyssey", in which a computer, HAL 9000 (only one letter away from IBM's initials), embodies the whole range of ethical questions raised by AI: would it represent a high level of intelligence, a benefit for humankind, or a danger? The film's influence was of course not scientific, but it helped popularize the theme, as did the science fiction author Philip K. Dick, who never ceased to wonder whether machines could one day feel emotions.

It was with the advent of the first microprocessors at the end of the 1970s that AI took off again and entered the golden age of expert systems.

The path was opened at Stanford University in 1965 with DENDRAL (an expert system specialized in molecular chemistry) and in 1972 with MYCIN (a system specialized in diagnosing blood diseases and prescribing drugs). Both relied on an "inference engine" programmed to be a logical mirror of human reasoning: when fed data, the engine provided answers of a high level of expertise.
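As a rough illustration of the principle (a hypothetical toy, not the actual DENDRAL or MYCIN code, which was far richer), a forward-chaining inference engine repeatedly fires rules whose conditions are satisfied by the known facts and adds their conclusions as new facts:

```python
# Minimal sketch of a forward-chaining inference engine in the spirit of early
# expert systems. The rules and facts below are invented for illustration only.

# Each rule maps a set of required facts to a single concluded fact.
RULES = [
    ({"fever", "abnormal_blood_count"}, "suspect_blood_infection"),
    ({"suspect_blood_infection", "penicillin_allergy"}, "recommend_alternative_antibiotic"),
]

def infer(facts, rules):
    """Apply rules repeatedly until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "abnormal_blood_count", "penicillin_allergy"}, RULES))
```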

These promises foreshadowed rapid expansion, but the craze fell away again by the end of the 1980s or the early 1990s. Programming such knowledge in fact required a great deal of effort, and beyond 200 to 300 rules a "black box" effect set in: it was no longer clear how the machine reasoned. Development and maintenance thus became extremely problematic, all the more so as faster and less expensive ways of achieving comparable results were emerging. It should be noted that in the 1990s the term "artificial intelligence" had almost become taboo, and more modest variations, such as "advanced computing", had even entered academic vocabulary.

In May 1997, Deep Blue (IBM's expert system) defeated Garry Kasparov at chess, fulfilling Herbert Simon's 1957 prediction, though 30 years later than he had forecast, yet this did not translate into renewed funding and development for this form of AI. Deep Blue operated by systematic brute force: all possible moves were evaluated and weighted. The defeat of the human being remains highly symbolic in the history of the field, but Deep Blue had in reality only mastered a very limited perimeter (the rules of chess), far from being able to model the complexity of the world.
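Deep Blue's real search was a massively parallel, heavily engineered system; the sketch below only illustrates the general idea of exhaustively exploring and weighting moves, as a plain minimax over a hypothetical game interface (legal_moves, apply_move, and evaluate are placeholder hooks, not Deep Blue's actual routines):

```python
# Minimal sketch of exhaustive game-tree search with position weighting.
# `legal_moves`, `apply_move`, and `evaluate` are hypothetical hooks, not Deep Blue code.

def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    """Explore every move up to `depth` plies and return the best achievable score."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # the "weight" of the position
    scores = [
        minimax(apply_move(state, m), depth - 1, not maximizing,
                legal_moves, apply_move, evaluate)
        for m in moves
    ]
    return max(scores) if maximizing else min(scores)

# Toy usage: each move adds 1 or 2 to a counter, and the position weight is the counter.
best = minimax(
    0, depth=3, maximizing=True,
    legal_moves=lambda s: [1, 2],
    apply_move=lambda s, m: s + m,
    evaluate=lambda s: s,
)
print(best)  # -> 5 (max adds 2, min adds 1, max adds 2)
```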

Since 2010: a new bloom based on massive data and new computing power

Two factors explain the new boom in the discipline around 2010. First, access to massive volumes of data. Where algorithms for image classification or cat recognition previously required researchers to collect their own samples, a simple Google search can now supply millions of examples.

Second, the discovery that the processors of computer graphics cards are highly efficient at accelerating the calculations of learning algorithms. Before 2010, because the process is highly iterative, it could take weeks to run through an entire sample. The computing power of these cards (capable of more than a thousand billion operations per second) has enabled considerable progress at a limited financial cost (less than 1,000 euros per card).

Thanks to this new hardware, IBM's AI Watson beat two Jeopardy! champions in 2011. In 2012, an AI at Google X succeeded in recognizing cats in videos; the task required some 16,000 processors, but the potential was extraordinary: a machine learning to distinguish something. In 2016, AlphaGo (Google's AI specialized in the game of Go) defeated the European champion Fan Hui and the world champion Lee Sedol, and then surpassed itself (AlphaGo Zero). Go has a combinatorial space far larger than chess (more possible configurations than there are particles in the universe), so such feats cannot be achieved by sheer brute force as Deep Blue did in 1997.