Artificial intelligence
From Wikipedia, the free encyclopedia
- "AI" redirects here. For other uses, see AI (disambiguation).
Artificial intelligence (AI) is intelligence exhibited by an artificial entity. Such a system is generally assumed to be a computer.
Although AI has a strong science-fiction connotation, it forms a vital branch of computer science, dealing with intelligent behavior, learning and adaptation in machines. Research in AI is concerned with producing machines that automate tasks requiring intelligent behavior. Examples include control, planning and scheduling, the ability to answer diagnostic and consumer questions, and handwriting, speech and facial recognition. As such, it has become a scientific discipline focused on providing solutions to real-life problems. AI systems are now in routine use in economics, medicine, engineering and the military, and are built into many common home computer software applications, traditional strategy games such as computer chess, and other video games.
For topics relating specifically to true (human-like) intelligence, see Strong AI.
Schools of thought
AI divides roughly into two schools of thought: Conventional AI and Computational Intelligence (CI).
Conventional AI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. It is also known as symbolic AI, logical AI, neat AI and Good Old-Fashioned Artificial Intelligence (GOFAI). (See also semantics.) Methods include:
- Expert systems: apply reasoning capabilities to reach a conclusion. An expert system can process large amounts of known information and provide conclusions based on them.
- Case-based reasoning
- Bayesian networks
- Behavior-based AI: a modular method of building AI systems by hand.
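The rule-based reasoning at the heart of expert systems can be sketched as a small forward-chaining engine that fires rules until no new conclusions appear. The rules and fact names below are purely illustrative, not drawn from any real system:

```python
# Minimal forward-chaining rule engine, in the spirit of classic
# expert systems. Each rule maps a set of required facts (premises)
# to a conclusion. Facts and rules here are hypothetical examples.

rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are all satisfied,
    adding their conclusions, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fever", "has_cough", "short_of_breath"}, rules)
print(sorted(derived))
```

Real expert systems such as MYCIN added confidence factors and explanation facilities on top of this basic cycle, but the match-fire loop is the core of the technique.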
Computational Intelligence involves iterative development or learning (e.g., parameter tuning in connectionist systems). Learning is based on empirical data and is associated with non-symbolic AI, scruffy AI and soft computing. Methods mainly include:
- Neural networks: systems with very strong pattern recognition capabilities.
- Fuzzy systems: techniques for reasoning under uncertainty; they have been widely used in modern industrial and consumer-product control systems.
- Evolutionary computation: applies biologically inspired concepts such as populations, mutation and survival of the fittest to generate increasingly better solutions to the problem. These methods most notably divide into evolutionary algorithms (e.g. genetic algorithms) and swarm intelligence (e.g. ant algorithms).
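The evolutionary-computation ideas above (populations, mutation, survival of the fittest) can be sketched as a toy genetic algorithm. The task here, maximizing the number of 1-bits in a bit string (often called OneMax), and all the parameter values are illustrative choices, not canonical ones:

```python
# Toy genetic algorithm for the OneMax problem: evolve a bit
# string toward all 1s. Population size, mutation rate and the
# number of generations are arbitrary illustrative settings.
import random

random.seed(0)  # fixed seed for reproducibility

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 40

def fitness(genome):
    # Fitness is simply the count of 1-bits.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent
    # onto a suffix of the other.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # "Survival of the fittest": keep the better half as parents,
    # then refill the population with mutated offspring.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))
```

Swarm-intelligence methods such as ant algorithms follow the same population-based logic but share information through the environment (e.g., pheromone trails) rather than through crossover.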
Hybrid intelligent systems attempt to combine these two groups. Expert inference rules can be generated through neural networks, or production rules can be derived from statistical learning, as in ACT-R.
A promising new approach called intelligence amplification tries to achieve artificial intelligence in an evolutionary development process as a side-effect of amplifying human intelligence through technology.
History
- Main article: History of artificial intelligence
Early in the 17th century, René Descartes proposed that bodies of animals are nothing more than complex machines, thus formulating the mechanistic theory, also known as the "clockwork paradigm". Blaise Pascal created the first mechanical digital calculating machine in 1642. In the 19th century, Charles Babbage and Ada Lovelace worked on programmable mechanical calculating machines.
Bertrand Russell and Alfred North Whitehead published Principia Mathematica in 1910–1913, which revolutionized formal logic. Warren McCulloch and Walter Pitts then published A Logical Calculus of the Ideas Immanent in Nervous Activity (1943), laying the foundations for neural networks. Norbert Wiener's Cybernetics or Control and Communication in the Animal and the Machine (MIT Press, 1948) popularized the term "cybernetics".
The 1950s were a period of active efforts in AI. The first working AI programs were written in 1951 to run on the Ferranti Mark I machine of the University of Manchester: a draughts-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz. John McCarthy coined the term "artificial intelligence" at the first conference devoted to the subject, in 1956. He also invented the Lisp programming language. Alan Turing introduced the "Turing test" as a way of operationalizing a test of intelligent behavior. Joseph Weizenbaum built ELIZA, a chatterbot implementing Rogerian psychotherapy.
At the same time, John von Neumann, who had been hired by the RAND Corporation, developed game theory, which would prove invaluable in the progress of AI research.
During the 1960s and 1970s, Joel Moses demonstrated the power of symbolic reasoning for integration problems in the Macsyma program, the first successful knowledge-based program in mathematics. Leonard Uhr and Charles Vossler published "A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators" in 1963, which described one of the first machine learning programs that could adaptively acquire and modify features and thereby overcome the limitations of simple perceptrons of Rosenblatt. Marvin Minsky and Seymour Papert published Perceptrons, which demonstrated the limits of simple neural nets. Alain Colmerauer developed the Prolog computer language. Ted Shortliffe demonstrated the power of rule-based systems for knowledge representation and inference in medical diagnosis and therapy in what is sometimes called the first expert system. Hans Moravec developed the first computer-controlled vehicle to autonomously negotiate cluttered obstacle courses.
In the 1980s, neural networks became widely used due to the backpropagation algorithm, first described by Paul John Werbos in 1974. The 1990s marked major achievements in many areas of AI and demonstrations of various applications. Most notably Deep Blue, a chess-playing computer, beat Garry Kasparov in a famous six-game match in 1997. DARPA stated that the costs saved by implementing AI methods for scheduling units in the first Gulf War have repaid the US government's entire investment in AI research since the 1950s.
The DARPA Grand Challenge, which started in 2004 and continues to this day, is a race for a $2 million prize in which cars drive themselves without any communication with humans, using GPS, computers and a sophisticated array of sensors, across several hundred miles of challenging desert terrain.
In the post-dot-com-boom era, websites such as Ask Jeeves have sprung up that use a simple form of AI to provide answers to questions by searching the Internet.
Philosophy
- Main article: Philosophy of artificial intelligence
The strong AI vs. weak AI debate is still a hot topic amongst AI philosophers. It involves the philosophy of mind and the mind-body problem. Most notably, Roger Penrose in his book The Emperor's New Mind and John Searle with his "Chinese room" thought experiment argue that true consciousness cannot be achieved by formal logic systems, while Douglas Hofstadter in Gödel, Escher, Bach and Daniel Dennett in Consciousness Explained argue in favour of functionalism. Many strong AI supporters consider artificial consciousness the holy grail of artificial intelligence.
Science fiction
In science fiction AI—almost always strong AI—is commonly portrayed as an upcoming power trying to overthrow human authority as in HAL 9000, Skynet, Colossus and The Matrix or as service humanoids like C-3PO, Data, the Bicentennial Man, the Mechas in A.I. or Sonny in I, Robot.
The inevitability of world domination by rampant AI, sometimes called "the Singularity", is also argued by science writers such as Isaac Asimov, Vernor Vinge and Kevin Warwick. In works such as the Japanese manga Ghost in the Shell, the existence of intelligent machines calls into question the definition of life as organisms rather than a broader category of autonomous entities, establishing a notional concept of systemic intelligence. See list of fictional computers and list of fictional robots and androids.
The BBC television series Blake's 7 featured a number of intelligent computers, including Zen, the controlling computer of the starship Liberator; Orac, a highly advanced supercomputer in a portable perspex case that had the ability to reason and even to predict the future; and Slave, the computer on the starship Scorpio.
See also
- Cognitive science
- Fifth generation computer
- Neuromancer
- Three Laws of Robotics
- AI effect
- Viking Youth Power Hour interview with Ken Gumbs, director of "Building Gods"
Typical problems to which AI methods are applied:
- Natural language processing, Translation and Chatterbots
- Non-linear control and Robotics
- Computer vision, Virtual reality and Image processing
- Game theory and Strategic planning
- Game AI and Computer game bot
- Artificial Creativity
Other fields in which AI methods are implemented:
- Automated reasoning
- Automation
- Behavior-based robotics
- Bio-inspired computing
- Chatbot
- Cognitive robotics
- Cybernetics
- Data mining
- Developmental robotics
- Evolutionary robotics
- Hybrid intelligent system
- Intelligent agent
- Intelligent control
- Knowledge Representation
- Semantic web