History of AI
Arguably, artificial intelligence (AI) debuted at a conference at Dartmouth College in 1956, just over a decade after the end of World War II. At the time there was a great deal of optimism: some attendees believed robots and AI machines would be doing the work of humans by the mid-1970s. That didn’t happen. Instead, funding dried up and a period known as the “AI Winter” began, one that arguably lasted into the 2000s, when IBM’s Watson piqued widespread interest in artificial intelligence again.
Now we’re at an interesting place. Like PCs in the early 1980s or the Internet in the early 1990s, artificial intelligence is “out there” and people know about it, if only from Tom Cruise and Will Smith movies, but it hasn’t reshaped most businesses just yet. Prominent Silicon Valley executives, like Sam Altman of Y Combinator and Elon Musk of Tesla and SpaceX, are beginning to do more around AI, including sounding alarms about its potential ramifications.
In 1950, Alan Turing proposed the Turing Test as a way of identifying machines whose intelligence is indistinguishable from a human’s. In his test, a judge holds text-only conversations with two hidden participants, one human and one machine. Turing argued that if the judge cannot reliably tell which is which, the machine can be thought of as intelligent. The idea is memorably echoed in the show Westworld, in a scene where an AI host named Angela poses essentially this question to a guest.
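The setup Turing described can be made concrete with a toy sketch. The names, responders, and judge below are purely hypothetical illustrations: a judge exchanges text-only messages with two anonymized participants and then guesses which one is the machine. The point of the sketch is structural, that if the machine’s replies are indistinguishable from the human’s, no judge can do better than chance.

```python
import random

# Hypothetical stand-ins for the two hidden participants. They return
# identical replies here, making them indistinguishable by construction.
def human_responder(prompt):
    return "I think the answer depends on context."

def machine_responder(prompt):
    return "I think the answer depends on context."

def run_test(judge, rounds=3):
    """One round of the imitation game: the judge converses with two
    hidden participants over text, then guesses which is the machine."""
    # The judge only ever sees the anonymous labels "A" and "B".
    participants = {"A": human_responder, "B": machine_responder}
    transcript = {label: [] for label in participants}
    for _ in range(rounds):
        for label, responder in participants.items():
            prompt = judge.ask(label)
            transcript[label].append((prompt, responder(prompt)))
    guess = judge.identify_machine(transcript)
    return guess == "B"  # True if the judge correctly caught the machine

class RandomJudge:
    # A judge with no distinguishing strategy: when the transcripts are
    # identical, guessing at random is as good as any other approach.
    def ask(self, label):
        return "What is something only a human would know?"

    def identify_machine(self, transcript):
        return random.choice(["A", "B"])

result = run_test(RandomJudge())
```

A real administration of the test replaces these stub responders with an actual human and an actual program; the machine “passes” when judges identify it no more often than chance.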
It was also around the mid-1950s that modern AI research officially began, leading to the establishment of numerous AI research programs and institutions with the goal of creating general AI. However, researchers greatly underestimated the difficulty of this task and instead developed simple rule-based systems that could operate only in extremely narrow domains. It wasn’t until the 2000s and early 2010s that AI based on artificial neural networks gained traction, producing huge breakthroughs in computer vision, speech recognition, and natural language processing.
Almost all new AI technologies developed today (such as Microsoft’s Cortana, Amazon’s Alexa, Snapchat and Instagram filters, DeepMind’s AlphaGo, Netflix recommendations, Google Translate, and Tesla’s self-driving cars) are powered by complex artificial neural networks. In certain domains, AI systems are already starting to pass the Turing Test, producing images and videos that are extremely difficult to distinguish from real ones. One concern about this capability is the automated generation of fake data and fake news, which could potentially mislead large populations of people.
Machine intelligence will inevitably continue to advance, and the prospect that it could eventually replace human intelligence has raised concern among many public figures and experts. Elon Musk believes that AI could be more dangerous than nuclear weapons and could potentially lead to human extinction if we are not careful; he has been a prominent advocate for the regulation of AI technologies. In February 2018, a group of leading AI researchers from institutions including Stanford University, the University of Oxford, and OpenAI published “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”. The 101-page report details potential malicious applications of AI technologies as well as interventions, potential solutions, and areas for further research.