Artificial intelligence makes headlines almost daily. Whether through a smart watch, a smart home device, or a casual chat with Siri, almost all of us encounter or rely on AI every day. Some forecasters think that AI could meet or exceed human intelligence within the next 10 years or so, although a New York Times op-ed published on November 5, 2018, argues that view is overoptimistic.
While the future is uncertain, we can be assured that AI has improved immensely in just the past 20 years, becoming a $3.5 billion global market. Below is an excerpt from analyst Sangeeta Rai’s report about AI in healthcare. At the bottom of the article, you will find more BCC Research reports related to this topic.
Although the term “artificial intelligence” was coined in the 1950s, studies of the nature of knowledge and reasoning began thousands of years before the 20th century. Intelligent artifacts appear in Greek mythology, and efforts to perform reasoning automatically and to build automata for tasks such as game-playing date back hundreds of years. Psychologists have long studied human cognition, providing insight into the nature of human intelligence. Philosophers have analyzed the nature of knowledge, studied the mind-body problem of how mental states relate to physical processes, and explored formal frameworks for deriving conclusions.
The advent of electronic computers, however, provided a revolutionary advance: intelligence could now be studied by building intelligent artifacts (systems that perform complex reasoning tasks) and by observing and experimenting with their behavior to identify fundamental principles. In 1950, a landmark paper by Alan Turing argued for the possibility of building intelligent computing systems. That paper proposed an operational test for comparing the intellectual ability of humans and AI systems, now generally called the “Turing Test.” In the Turing Test, a judge uses a teletype to communicate with two players in other rooms: a person and a computer. The judge knows the players only by anonymous labels, such as “player A” and “player B,” on the text they send. By typing questions to the players and examining their answers, the judge attempts to decide which player is the computer and which is the human. Both try to convince the judge that they are the human; the machine’s goal is to answer so that the judge cannot reliably distinguish the two.
The significance of the Turing Test has been controversial. Some, both inside and outside the AI field, have believed that building a system to pass the Turing Test should be the goal of AI. Others, however, reject the goal of developing systems to imitate human behavior. Ford and Hayes illustrate this point with an analogy between developing artificial intelligence and developing mechanical flight. Early efforts at mechanical flight were based on trying to imitate the flight of birds, which at the time were the only available examples of flight. How birds flew was not understood, but their observed features (beaks, feathers, flapping wings) could be imitated, and these became models for aircraft, to the extent that airplanes with beaks were featured in a 1900s textbook on aircraft design. Success at mechanical flight, however, depended on replacing attempts at imitation with study of the functional requirements for flight and the development of aircraft that used all available methods to achieve them. In addition, passing the Turing Test is not a precondition for developing useful practical systems. For example, an intelligent system that aids doctors or tutors students can have enormous practical impact with only the ability to function in a specific, limited domain.
Early AI research rapidly developed systems to perform a wide range of tasks often associated with intelligence in people, including theorem-proving in geometry, symbolic integration, solving equations and even solving analogical reasoning problems found on human intelligence tests. However, research revealed that methods effective in small sample domains might not scale up to larger and more complex tasks, which led to an awareness of the enormous difficulty of the problems the field aimed to address. A classic example is early work in machine translation, recognized in the 1960s to be a far more difficult problem than expected; funding for machine translation research was subsequently terminated.
Two impediments to wider application of early AI systems were their reliance on general-purpose methods and their lack of domain knowledge. For small tasks, exhaustively considering possibilities may be practical, but for richer tasks, specialized knowledge is needed to focus reasoning. This observation led to research on knowledge-based systems, which demonstrated that there is an important class of problems requiring deep but narrow knowledge; systems capturing this knowledge in the form of rules can achieve expert-level performance on these tasks.
An early example, DENDRAL, used rules about mass spectrometry and other data to hypothesize structures for chemical compounds. Using only simple inference methods, it achieved expert-level performance and produced results published in the chemical literature. Such systems provided the basis for numerous applied AI systems. Continuing research revealed the need for additional methods, such as acquiring the knowledge a system uses, dealing with incomplete or uncertain information, and automatically adapting to new tasks and environments.
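To make the rule-based idea concrete, here is a minimal forward-chaining sketch in Python: rules whose premises are all satisfied fire and add their conclusions as new facts, until nothing changes. The specific rules and facts below are hypothetical illustrations in the spirit of a narrow-domain expert system, not DENDRAL's actual knowledge base.

```python
def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are all known, adding
    their conclusions, until no new facts can be derived.

    facts: set of fact strings
    rules: list of (premises, conclusion) pairs, premises a set of strings
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical rules capturing "deep but narrow" domain knowledge.
rules = [
    ({"peak at m/z 31", "contains oxygen"}, "likely alcohol fragment"),
    ({"likely alcohol fragment", "molecular weight 32"}, "candidate: methanol"),
]
facts = {"peak at m/z 31", "contains oxygen", "molecular weight 32"}

print(forward_chain(facts, rules))
```

Even this toy loop shows the key property of such systems: the inference method is simple and generic, while the expertise lives entirely in the rules, which is why adding or revising knowledge does not require changing the reasoning engine.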