The beginnings of AI

Welcome to the beginning of our 12-part series on Artificial Intelligence (AI). Each new article will focus on a specific area of AI. The complete series should provide an in-depth look at this highly fascinating subject and its deepening use cases, which evolve daily as we progress into the Fourth Industrial Revolution.


Since the beginning of time, mankind has been on a quest to investigate the concepts of consciousness, human intellect and the drive of human beings towards greatness, or the lack thereof. Philosophical debate about the creation of knowledge, reasoning and logic can be witnessed across the ages: Pythagoras (c. 580 – c. 500 BC) believed the deepest reality to be composed of numbers, and that souls are immortal; Epictetus (c. 55 – 135 AD) emphasised an ethics of self-determination; and Immanuel Kant (1724-1804) explored deontology and was a proponent of synthetic a priori truths. [1]

To trace the first stirrings of AI, it is worth mentioning the Catalan poet and theologian Ramon Llull. He published Ars generalis ultima (The Ultimate General Art) in 1308, perfecting his method of using paper-based mechanical means to create new knowledge from combinations of concepts.

Short timeline of AI progression in the last century

Definitions

To lay the foundation, let's first present some widely used definitions of Artificial Intelligence before moving into its history.

(A) Generally speaking, the term is used to describe systems whose objective is to use machines to emulate and simulate human intelligence and the corresponding behavior. This can be accomplished with simple algorithms and pre-defined patterns, but can become far more complex as well. – Androidpit.com

(B) The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. – oxforddictionaries.com

(C) The field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition. – amazon.com

We have included one technology company definition (Amazon) to present an example of how the meaning is also shaped by the contributions which every tech company is making to this rapidly developing field. Google AI, Facebook AI Research, IBM, Apple and Microsoft all have their own public definitions that showcase the innovation they are driving in products for public good.

John McCarthy – Known as the Father of AI

Let's focus on the period from Alan Turing's Imitation Game in 1950 onwards, when some of the most important developments in the thinking, research and acceptance of AI as a field took place.

The term Artificial Intelligence was coined by John McCarthy in 1956. The main advances of the past 60 years have focused on search algorithms, machine learning algorithms, and integrating statistical analysis into understanding the world at large [Washington06].

The Turing Test

English mathematician Alan Turing, often referred to as the father of modern computer science [2], published a paper entitled "Computing Machinery and Intelligence" in 1950. This was one of the first insights into the field of Artificial Intelligence, though it wasn't known by that name at the time. "Can machines think?" was the opening question of the paper. At the time, Turing was the Deputy Director at the University of Manchester's Computing Laboratory. The paper outlined a test called the Imitation Game, asking the question: "If a computer could imitate the sentient behavior of a human, would that not imply that the computer itself was sentient?" The game was a simple test that set out to determine whether machines can think. The graphic below helps give a good sense of how the Imitation Game would assess a computer's ability to think.

Turing Test – Techtarget.com
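To make the protocol concrete, here is a minimal sketch in Python. The `human` and `machine` respondents are hypothetical stand-ins; the point is only the structure of the game: an interrogator questions two unseen parties over text and must guess which is the machine.

```python
import random

def machine(question: str) -> str:
    """Hypothetical stand-in for a conversational program."""
    return "That is an interesting question."

def human(question: str) -> str:
    """Stand-in for a real person typing replies."""
    return "Let me think about that for a moment."

def imitation_game(questions):
    # Randomly assign the human and the machine to the labels A and B,
    # so the interrogator cannot tell them apart by position.
    respondents = dict(zip(("A", "B"), random.sample([human, machine], 2)))
    for question in questions:
        print(f"Interrogator: {question}")
        for label in ("A", "B"):
            print(f"  {label}: {respondents[label](question)}")
    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    if respondents.get(guess) is machine:
        print("Correct - the machine was identified.")
    else:
        print("Wrong - the machine passes this round of the game.")

imitation_game(["Can machines think?"])
```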

Overall, potential challenges with Turing's test fall into one of two categories [Washington06]:

  1. Does imitating a human actually prove intelligence or is it just a hard problem?
  2. Is intelligence possible without passing the Turing test?

Co-founder of the MIT Media Lab Nicholas Negroponte proposed another variation of the Turing Test. His version proposed that a machine work alongside a human to demonstrate its intelligence, rather than being interrogated by a human being. This test was thought to be more difficult, but it echoes today's thinking about humans working alongside AI to achieve goals.

LISP

An acronym for list processing, LISP was developed by John McCarthy in 1959 and is a commonly used language for Artificial Intelligence development. The language's ability to compute with symbolic expressions rather than numbers makes it convenient for AI applications. [3] Symbolic expressions are built from abstract symbols that are used to represent knowledge. This kind of processing pursues the idea that human thinking can be constructed on a hierarchical, logical level. [4]
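To make symbolic processing concrete, here is a minimal sketch (in Python rather than LISP, for readability) that represents knowledge as nested lists of symbols, much like LISP s-expressions, and evaluates them by rule rather than by numeric computation. The facts and operators are invented for illustration.

```python
# A toy symbolic evaluator: expressions are nested lists of symbols,
# in the spirit of LISP s-expressions. Facts and operators are invented.

def evaluate(expr, facts):
    """Recursively evaluate a symbolic expression against known facts."""
    if isinstance(expr, str):               # an atom is true if it is a known fact
        return expr in facts
    op, *args = expr                        # a compound form: [operator, arg1, ...]
    if op == "and":
        return all(evaluate(a, facts) for a in args)
    if op == "or":
        return any(evaluate(a, facts) for a in args)
    if op == "not":
        return not evaluate(args[0], facts)
    raise ValueError(f"unknown operator: {op}")

facts = {"has-feathers", "lays-eggs"}
print(evaluate(["and", "has-feathers", "lays-eggs"], facts))     # True
print(evaluate(["or", "has-fur", ["not", "lays-eggs"]], facts))  # False
```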

Early test use case: Applying AI to chess

Intelligence has always been linked with chess. In 1950, Claude Shannon was the first to write a paper about developing a chess-playing programme. The distinction between Brute Force Type A programmes and Intelligent, Strategic Type B programmes was at the heart of this paper. How do we understand this?

  1. Type A – Brute Force – programmes would use pure brute force, examining thousands of moves with a minimax search algorithm (a minimal sketch follows this list)
  2. Type B – Strategic – programmes would use specialised heuristics to examine only a few key candidate moves. [Washington06]
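Below is a minimal minimax sketch in Python to illustrate the Type A idea. The `game` interface (`legal_moves`, `apply`, `is_terminal`, `score`) is a hypothetical abstraction for illustration, not Shannon's actual design.

```python
# A minimal minimax sketch (Shannon's Type A idea): examine every legal move
# to a fixed depth, assuming the opponent also plays the best reply.
# The `game` object and its methods are a hypothetical interface.

def minimax(game, state, depth, maximizing):
    if depth == 0 or game.is_terminal(state):
        return game.score(state)            # static evaluation of the position
    values = [
        minimax(game, game.apply(state, move), depth - 1, not maximizing)
        for move in game.legal_moves(state)
    ]
    return max(values) if maximizing else min(values)

def best_move(game, state, depth=3):
    """Brute force: score every legal move and return the strongest one."""
    return max(
        game.legal_moves(state),
        key=lambda move: minimax(game, game.apply(state, move), depth - 1, False),
    )
```

In practice, Type A programmes pair this exhaustive search with pruning techniques such as alpha-beta to keep the number of examined moves manageable.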

The best known example of a Type A programme defeating a human competitor is IBM's Deep Blue. In 1997, it won against World Champion Garry Kasparov.

The Chinese Room Argument

Many may argue that having to analyse millions of moves to make one chess move is not grounds for displaying higher intelligence, though it does make for solid play. This concern is captured by the Chinese Room Argument, with which John R. Searle in 1980 aimed to demonstrate the falsehood of 'strong' AI. Strong AI here refers to genuine cognitive intelligence produced by a machine. But how can anyone show it to be false if we don't know what the human mind's program is? [5]

The Chinese Room Argument was presented as a thought experiment to support Searle's thesis that the strength of AI needs to be understood in terms of 'understanding' versus 'following rules'. The experiment, briefly described in the graphic below, places a human in a room with two slots. Through the first slot, someone outside the room slips the person inside a note with Chinese characters. The person has a huge rulebook of Chinese characters that tells them how to respond to the note received, without their understanding any Chinese. The person passes the answer out through the second slot. The experiment is meant to show that while the human can produce the desired responses from the information available, there is no intrinsic understanding or conscious thought behind them.

Premise of the Chinese Room Argument
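As a playful illustration of 'following rules' versus 'understanding', here is a minimal sketch in Python: a lookup table maps incoming notes to canned replies, producing plausible output with no comprehension involved. The rulebook entries are invented for illustration.

```python
# A toy "Chinese Room": the program follows lookup rules to produce fluent
# replies, yet understands nothing about what the symbols mean.
# The rulebook entries are invented for illustration.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am fine, thanks."
    "你会说中文吗？": "会，一点点。",  # "Do you speak Chinese?" -> "Yes, a little."
}

def chinese_room(note: str) -> str:
    """A note comes in through the first slot; a reply goes out the second."""
    return RULEBOOK.get(note, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # a fluent reply, with zero understanding
```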

Expert systems

Expert systems, a subset of AI, are computer programmes that can provide or model human expertise in one or more knowledge areas. The first expert system, called DENDRAL, was developed at Stanford University by Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi in 1965. This system automated the decision-making process and problem-solving behavior of organic chemists, with the general aim of studying hypothesis formation and constructing models of empirical induction in science. [6]

Components of Expert System in Artificial Intelligence

The components of an AI Expert System are (a minimal sketch follows the list):

  1. Knowledge base – a collection of data made up of factual knowledge and heuristic knowledge
  2. Inference engine – helps to arrive at a solution by acquiring and manipulating the available collection of data. Can resolve conflicts in the reasoning by applying rules
  3. User interface – the medium of interaction between the user and the system. Displayed as natural language on screen or verbal narrations [7]
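To ground these components, here is a minimal sketch in Python of a rule-based system: a knowledge base of facts and if-then rules, and a forward-chaining inference engine that fires rules until no new facts emerge. The facts and rules are invented for illustration.

```python
# A minimal expert system: a knowledge base (facts + rules) and a
# forward-chaining inference engine. Facts and rules are invented.

facts = {"has-fever", "has-rash"}                    # factual knowledge
rules = [                                            # heuristic knowledge
    ({"has-fever", "has-rash"}, "suspect-measles"),  # (conditions, conclusion)
    ({"suspect-measles"}, "recommend-doctor-visit"),
]

def infer(facts, rules):
    """Inference engine: fire every rule whose conditions hold, until no new facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# The "user interface" here is simply printed output.
print(sorted(infer(facts, rules)))
```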

But expert systems as a whole raise interesting questions about the intuitive intelligence of human beings versus reasoning machines. Human learning is a gradual process, and there appears to be no sudden leap forward from rule-based knowledge to experience-based know-how [Washington06]. This is distinctly different from machines, which use pure step-by-step reasoning through logic processing.

AI Winter

The 1970s witnessed the AI hype going into a deep freeze. James Lighthill's 1973 report to the British Science Research Council detailed that AI was not developing as well, or proving as successful, as had been thought. This resulted in government cuts to research funding in the UK, echoing the funding cuts that followed the Automatic Language Processing Advisory Committee's report in the US.

More insight into the AI Winter

Present-day computing has at its advantage enormous processing power coupled with vast storage. These powerful computing capacities are able to draw on a wealth of data to produce astounding machine learning capabilities. Machine learning is embedded in many of the online services we use today.

From chatbots to sentiment analysis, search algorithms and word processing, our digital experiences are shaped as much by machines that learn and can 'cognitively' comprehend as by humans making decisions based on rationality and emotion.


This article aims to provide an overall history of the concept and development of AI in computing. Questions around AI, its existence, reasoning and depth will appear in later articles.


References

[Washington06] University of Washington. “The History of Artificial Intelligence” December 2006, https://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf

[1] Wikipedia, Timeline of Western Philosophers, https://en.wikipedia.org/wiki/Timeline_of_Western_philosophers

[2] Wikipedia, Alan Turing, http://en.wikipedia.org/wiki/Alan_Turing

[3] TechTarget, LISP, https://searchmicroservices.techtarget.com/definition/LISP-list-processing

[4] Androidpit, What is AI?, https://www.androidpit.com/what-is-artificial-intelligence-history-definitions-and-applications

[5] Cognitive Science, John R. Searle's Chinese Room, http://www.psych.utoronto.ca/users/reingold/courses/ai/cache/searle.html

[6] Forbes, A Very Short History of Artificial Intelligence, https://www.forbes.com/sites/gilpress/2016/12/30/a-very-short-history-of-artificial-intelligence-ai/#73e6738c6fba

[7] DataFlair, What is Expert System in Artificial Intelligence – How it Solve Problems, https://data-flair.training/blogs/expert-system/

Leave your questions in the comments below, or email us at talk@sociallyacceptable.co.za.

Ciao for now.

Socially Acceptable is an African communications and training business that is focused on helping enterprises to build, maintain and profit from online brand development and successful conversion of leads to customers.

With offices in Johannesburg and Durban, we provide services throughout the country and the African continent.
