Image: Kanangra winter wonderland (https://upload.wikimedia.org/wikipedia/commons/5/5f/Kanangra_winter_wonderland.jpg)
Takeaway: AI has a surprisingly long history, marked by periods of optimism and support followed by disenchantment. Now that we're at a new high point, a third AI winter may seem inevitable. But perhaps this round will be different.
Today we have all kinds of “smart” devices, many of which can be activated by voice alone and offer intelligent responses to our queries. This kind of cutting-edge technology may lead us to think of AI as a product of the 21st century. But it actually has much earlier roots, going all the way back to the middle of the 20th century.
AI Roots
It may be said that Alan Turing’s ideas about computational thinking laid the foundation for AI. John McCarthy, professor of computer science at Stanford University, credits Turing with presenting the concept in a 1947 lecture. It was certainly something Turing thought about: his 1950 essay explores the question, “Can machines think?” and gave rise to the famous Turing test. (To learn more, check out Thinking Machines: The Artificial Intelligence Debate.)
Even earlier, though, in 1945, Vannevar Bush set out a vision of futuristic technology in an Atlantic article entitled “As We May Think.” Among the wonders he predicted was a machine able to rapidly process data to identify people with specific characteristics or retrieve requested images.
Emergence
Thorough as they were in their explanations, none of these visionary thinkers employed the term “artificial intelligence.” That term only emerged in 1955, in the title of “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” to name the new field of research to be explored. The conference itself took place in the summer of 1956.
Poised at the beginning of a decade of optimism, researchers expressed confidence in the future and thought it would take just a generation for AI to become a reality. There was strong support for AI in the U.S. during the 1960s: with the Cold War in full swing, the country did not want to fall behind the Russians on the technology front. MIT benefited, receiving a $2.2 million grant from DARPA in 1963 to explore machine-aided cognition.
Progress continued with funding for a range of AI programs, including MIT’s SHRDLU, David Marr’s theories of machine vision, Marvin Minsky’s frame theory, the Prolog language, and the development of expert systems. That level of support for AI came to an end by the mid-1970s, though.
And now winter is coming