AI – key concepts

AI has many definitions, depending on the nature of its techniques, its usage, and the era of its research. However, the most common definition is as follows: AI is the intelligence and capability exhibited by a computer to perceive, learn, and solve problems, with a minimal probability of failure.

The ability of computers to compute and achieve results far faster than humans has made them the cornerstone of automation across various industries. Computational work done by humans is prone to errors, is time-consuming, and loses accuracy as the problem gets harder to solve. Computers, however, have filled this role for a long time, from the early beginnings of automation that we can still observe in many passive forms in our daily lives. One of the best examples of such automation is Optical Character Recognition (OCR), which converts the text embedded in an image or document into a text source ready for computation. Computers equipped with OCR are more accurate and take less time to reproduce content than humans. Similarly, barcode scanners have led the way to faster checkout times at retail shops. Although these early systems were not truly intelligent per se, they are still recognized for their efficiency.
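To make the OCR example concrete, here is a minimal sketch in Python. It assumes the open source Tesseract engine and its pytesseract wrapper are installed; the file name scanned_page.png is purely hypothetical:

```python
# A minimal OCR sketch: convert the text embedded in an image into a
# plain-text string that a program can work with.
from PIL import Image    # pip install pillow
import pytesseract       # pip install pytesseract (requires Tesseract itself)

image = Image.open("scanned_page.png")      # hypothetical scanned document
text = pytesseract.image_to_string(image)   # run OCR on the image
print(text)                                 # the extracted, computable text
```

In a few lines, the scanned page becomes searchable, editable text, which is exactly the kind of tedious reproduction work that used to be done manually.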

Although there were no generally agreed-upon criteria for AI in its early days, researchers have made major efforts over the past eight decades, which we will consider in the following section.

History of AI

Numerous depictions of AI in the form of robots, artificial humans, or androids can be found in art and literature dating back to as early as the 4th century BC in Greek mythology. AI research and development gained mainstream momentum in the mid-20th century. The phrase artificial intelligence was coined during a summer workshop held at Dartmouth College in New Hampshire in 1956. The workshop was called the Dartmouth Summer Research Project on Artificial Intelligence and was organized by Prof. John McCarthy, then a mathematics professor at Dartmouth College. This workshop led to the development of AI as a distinct field within the overlapping disciplines of mathematics and computer science.

However, it is also notable that two decades before the Dartmouth workshop, in 1936, the British mathematician Alan Turing had proposed the Turing machine, an abstract computational model that can execute algorithms. He later published the 1950 paper Computing Machinery and Intelligence (https://www.csee.umbc.edu/courses/471/papers/turing.pdf), in which he proposed a way of testing whether a machine's responses can be distinguished from a human's. This test is widely known as the Turing test today.

The following diagram shows how the Turing test is performed: a human interrogator tries to determine whether the responses they receive come from an AI or from another human:

Fig 2.1: The Turing test performed by an interrogator (C) between an AI (A) and a human (B).

You can check out the preceding diagram by Juan Alberto Sánchez Margallo in more detail at https://en.wikipedia.org/wiki/Turing_test#/media/File:Turing_test_diagram.jpg. The diagram is licensed under CC BY 2.5: https://creativecommons.org/licenses/by/2.5/.

Almost a decade after the summer workshop at Dartmouth College, the first chatbot, named ELIZA, was showcased by AI researcher Joseph Weizenbaum at MIT in 1966. It was one of the first programs to attempt the Turing test; it worked by matching keywords in the user's input and replying with canned templates. After ELIZA, a new range of expert systems and learning approaches evolved over the following two decades, into the 1980s.
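To give a flavor of how ELIZA worked, here is a minimal sketch of the keyword-and-template approach it popularized. This is an illustration written for this section, not Weizenbaum's original DOCTOR script; the patterns and canned replies are invented for the example:

```python
import random
import re

# ELIZA-style rules: a keyword pattern paired with reply templates.
# {0} is filled with whatever the pattern captured from the user's input.
RULES = [
    (r"\bI need (.+)", ["Why do you need {0}?",
                        "Would getting {0} really help you?"]),
    (r"\bI am (.+)", ["How long have you been {0}?",
                      "Why do you think you are {0}?"]),
    (r"\bmy (mother|father|family)\b", ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def respond(utterance: str) -> str:
    # Try each keyword pattern in order; answer with a matching template.
    for pattern, templates in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    # No keyword matched: fall back to a neutral prompt to keep talking.
    return random.choice(FALLBACKS)

print(respond("I am feeling a bit lost"))
# e.g. "Why do you think you are feeling a bit lost?"
```

The real ELIZA added reflection rules (swapping my for your, and so on), but even this skeleton shows why purely mechanical pattern matching could feel surprisingly human in conversation.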

With this basic understanding of AI and its history under our belts, let's turn to some of the impediments that researchers faced in the field's early days.

AI winter

AI winter is a term used widely in the IT industry for a period during which AI researchers faced serious challenges, leading to severe funding cuts and a slowdown of AI as a specialized field.

During the early 1970s, government support for academic AI research in the US and Britain was abruptly withdrawn, following overblown speculation about AI and the criticism that ensued. The complex international situation at the time also contributed to the halt of many AI research projects.

The AI winter is commonly considered to have started in the early 1970s and to have lasted nearly two decades, prolonged by research failures, waning motivation, growing skepticism among government bodies, and the collapse of some of the foundational goals that had been set before several research programs began.

Now that we have learned a bit about the history of AI, we will explore, in the following section, the different types of AI and the forms in which it is manifested.