DESCRIBE THE CLASSIFICATION SYSTEM AND ARTIFICIAL INTELLIGENCE
INTRODUCTION
A classification system arranges items into classes according to common relations or affinities. Artificial intelligence is a science and technology based on disciplines such as computer science, biology, psychology, linguistics, mathematics, and engineering. The goal of AI is to develop computers that can think, see, hear, walk, talk and feel. A major thrust of AI is the development of computer functions normally associated with human intelligence, such as reasoning, learning, and problem solving.
CLASSIFICATION SYSTEM AND ARTIFICIAL INTELLIGENCE
Three methods of classification (machine learning) were used to produce a program that chooses a detector for ion chromatography (IC). The selected classification systems were: C4.5, an induction method based on an information-theory algorithm; INDUCT, which is based on a probability algorithm; and a self-organising neural network developed specifically for this application. They differ both in the learning strategy employed to structure the knowledge and in the representation of the knowledge acquired by the system, i.e., rules, decision trees and a neural network.
A database of almost 4000 cases, covering most IC experiments reported in the chemical literature between 1979 and 1989, formed the basis for the development of the system.
Overall, all three algorithms performed very well for this application. They induced rules, or produced a network, with about a 70% success rate for predicting the detector reported in the original publication and over 90% success for choosing a detector that could be used for the described method. This was considered acceptable given the nature of the problem domain and of the training set. Each method handled the very high noise levels in the training set effectively and was able to select the relevant attributes.
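As a rough illustration of the kind of classifier induction described above, the sketch below trains a decision tree on a handful of invented ion-chromatography cases. The feature names, detector labels and data are hypothetical, and scikit-learn's tree learner merely stands in for the C4.5, INDUCT and self-organising-network methods actually used in the study.

# Minimal sketch: induce a decision tree that suggests an IC detector.
# All features, labels and rows below are hypothetical illustrations.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row is a made-up method: [analyte charge, eluent pH, analyte conc. (ppm)]
X = [
    [-1, 8.5, 10.0],
    [-1, 6.0, 0.5],
    [ 1, 3.0, 50.0],
    [ 1, 4.5, 1.0],
    [-2, 9.0, 5.0],
    [ 2, 2.5, 100.0],
]
# Detector reported for each made-up experiment.
y = ["conductivity", "UV", "conductivity", "electrochemical", "conductivity", "UV"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Inspect the induced decision rules and predict a detector for a new method.
print(export_text(tree, feature_names=["charge", "pH", "conc_ppm"]))
print(tree.predict([[-1, 7.0, 2.0]]))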
Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field that studies how to create computers and computer software capable of intelligent behavior. Major AI researchers and textbooks define this field as “the study and design of intelligent agents”, in which an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as “the science and engineering of making intelligent machines”.
AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other.[5] Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues. Some subfields focus on the solution of specific problems. Others focus on one of several possible approaches or on the use of a particular tool or towards the accomplishment of particular applications.
The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. General intelligence is still among the field’s long-term goals. Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI. There are a large number of tools used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics, and many others. The AI field is interdisciplinary, in which a number of sciences and professions converge, including computer science, mathematics, psychology, linguistics, philosophy and neuroscience, as well as other specialized fields such as artificial psychology.
The field was founded on the claim that a central property of humans, human intelligence—the sapience of Homo sapiens—“can be so precisely described that a machine can be made to simulate it.” This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of tremendous optimism but has also suffered stunning setbacks. Today it has become an essential part of the technology industry, providing the heavy lifting for many of the most challenging problems in computer science.
Goals
The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention.
Deduction, reasoning, problem solving
Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.
For difficult problems, most of these algorithms can require enormous computational resources – most experience a “combinatorial explosion”: the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem-solving algorithms is a high priority for AI research.
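The sketch below makes the combinatorial explosion concrete for one classic search problem, counting the distinct round trips a brute-force travelling-salesman search would have to examine as the number of cities grows; the example is chosen purely for illustration.

# (n - 1)! / 2 distinct undirected tours exist for n cities.
from math import factorial

for n_cities in (5, 10, 15, 20, 25):
    tours = factorial(n_cities - 1) // 2
    print(f"{n_cities:2d} cities -> {tours:,} tours to examine")

# At 25 cities there are already more than 3 x 10^23 tours, far beyond what
# any realistic amount of memory or computer time can enumerate.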
Human beings solve most of their problems using fast, intuitive judgements rather than the conscious, step-by-step deduction that early AI research was able to model. AI has made some progress at imitating this kind of “sub-symbolic” problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; statistical approaches to AI mimic the probabilistic nature of the human ability to guess.

Knowledge representation

An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.
Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts and so on that the machine knows about. The most general are called upper ontologies, which attempt to provide a foundation for all other knowledge.
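A minimal sketch of such a representation, assuming a toy animal domain invented for this example, might store concepts, their “is-a” links and their properties in plain dictionaries and collect inherited properties by walking up the hierarchy:

# Toy ontology: concepts, "is-a" links and properties as plain dictionaries.
ontology = {
    "animal": {"is_a": None,     "properties": {"alive": True}},
    "bird":   {"is_a": "animal", "properties": {"has_wings": True}},
    "canary": {"is_a": "bird",   "properties": {"sings": True}},
}

def inherited_properties(concept, kb):
    """Collect properties along the is-a chain; more specific concepts win."""
    props = {}
    while concept is not None:
        node = kb[concept]
        for key, value in node["properties"].items():
            props.setdefault(key, value)   # keep values set by more specific concepts
        concept = node["is_a"]
    return props

print(inherited_properties("canary", ontology))
# -> {'sings': True, 'has_wings': True, 'alive': True}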
Among the most difficult problems in knowledge representation are:
Default reasoning and the qualification problem
Many of the things people know take the form of “working assumptions.” For example, if a bird comes up in conversation, people typically picture an animal that is fist sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969 as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.
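One simple (and much weaker) way to approximate default reasoning is to apply the working assumption first and then override it with any known exceptions. The bird example below is a toy sketch of that idea, not a faithful model of the logical formalisms that have actually been proposed.

# Defaults hold unless an exception overrides them; the open-ended exception
# list is exactly what the qualification problem is about.
defaults = {"bird": {"can_fly": True, "size": "fist-sized", "sings": True}}
exceptions = {
    "penguin": {"can_fly": False, "sings": False},
    "ostrich": {"can_fly": False, "size": "large"},
}

def describe(kind):
    facts = dict(defaults["bird"])           # start from the working assumption
    facts.update(exceptions.get(kind, {}))   # then apply any known exceptions
    return facts

print(describe("sparrow"))   # the defaults apply unmodified
print(describe("penguin"))   # flying and singing are overridden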
The breadth of commonsense knowledge
The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering—they must be built, by hand, one complicated concept at a time. A major goal is to have the computer understand enough concepts to be able to learn by reading from sources like the internet, and thus be able to add to its own ontology.
The subsymbolic form of some commonsense knowledge
Much of what people know is not represented as “facts” or “statements” that they could express verbally. For example, a chess master will avoid a particular chess position because it “feels too exposed” or an art critic can take one look at a statue and instantly realize that it is a fake. These are intuitions or tendencies that are represented in the brain non-consciously and sub-symbolically. Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI, computational intelligence, or statistical AI will provide ways to represent this kind of knowledge.
Planning

A hierarchical control system is a form of control system in which a set of devices and governing software is arranged in a hierarchy.

Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or “value”) of the available choices.
In classical planning problems, the agent can assume that it is the only thing acting on the world and can be certain of the consequences of its actions. However, if the agent is not the only actor, it must periodically check whether the world matches its predictions and change its plan as necessary, which requires the agent to reason under uncertainty.
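A minimal sketch of classical planning under these assumptions is shown below: the agent's model predicts how each action changes the state, and a breadth-first search over predicted states returns a sequence of actions that reaches the goal. The blocks-on-a-table domain is invented for illustration.

from collections import deque

def successors(state):
    """Yield (action name, next state) pairs predicted by the agent's model."""
    stack, table = state
    for i, block in enumerate(table):          # put any table block on the stack
        yield (f"stack {block}", (stack + (block,), table[:i] + table[i + 1:]))
    if stack:                                  # take the top block off the stack
        yield (f"unstack {stack[-1]}", (stack[:-1], table + (stack[-1],)))

def plan(start, goal):
    """Breadth-first search over predicted states; returns a list of actions."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for name, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [name]))
    return None

# Nothing stacked, blocks A and B on the table; goal: a stack of A then B.
print(plan(((), ("A", "B")), (("A", "B"), ())))   # -> ['stack A', 'stack B']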
Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.
Learning
Machine learning is the study of computer algorithms that improve automatically through experience and has been central to AI research since the field’s inception.
Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space. These three types of learning can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.
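The sketch below illustrates the two supervised settings on tiny invented datasets, with off-the-shelf scikit-learn estimators standing in for any particular learning algorithm; reinforcement learning is only indicated schematically in the final comment.

from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression

# Classification: assign a category after seeing labelled examples
# (the two-feature data and the class names are invented).
X_cls = [[150, 1], [170, 0], [160, 1], [180, 0]]
y_cls = ["A", "B", "A", "B"]
clf = KNeighborsClassifier(n_neighbors=1).fit(X_cls, y_cls)
print(clf.predict([[165, 1]]))        # -> ['A']

# Regression: learn a numeric function relating inputs to outputs.
X_reg = [[1], [2], [3], [4]]
y_reg = [2.1, 3.9, 6.2, 8.1]          # roughly y = 2x
reg = LinearRegression().fit(X_reg, y_reg)
print(reg.predict([[5]]))             # close to 10

# Reinforcement learning (schematic only): after each action a with reward r,
# the agent would nudge a value estimate, e.g. Q[a] += alpha * (r - Q[a]).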
Within developmental robotics, developmental learning approaches were elaborated for lifelong cumulative acquisition of repertoires of novel skills by a robot, through autonomous self-exploration and social interaction with human teachers, and using guidance mechanisms such as active learning, maturation, motor synergies, and imitation.
Natural language processing (communication)

A parse tree represents the syntactic structure of a sentence according to some formal grammar.
Natural language processing gives machines the ability to read and understand the languages that humans speak. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval (or text mining), question answering and machine translation.
A common method of processing and extracting meaning from natural language is through semantic indexing. Increases in processing speed and the falling cost of data storage make indexing large volumes of abstractions of the user’s input much more efficient.
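The sketch below is a much simplified, purely lexical stand-in for such an index, assuming invented documents and a plain TF-IDF weighting from scikit-learn; a real semantic index would add a further layer such as latent semantic analysis on top of this.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "choosing a detector for ion chromatography experiments",
    "neural networks learn detection rules from labelled examples",
    "decision trees induce classification rules from examples",
]
vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(docs)   # one weighted term vector per document

query = vectorizer.transform(["which detector suits an ion chromatography method"])
scores = cosine_similarity(query, index)[0]
print(docs[scores.argmax()])             # the most similar indexed document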
Perception
Machine perception is the ability to use input from sensors (such as cameras, microphones, tactile sensors, sonar and others more exotic) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected subproblems are speech recognition, facial recognition and object recognition.
Motion and manipulation
The field of robotics is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation and navigation, with sub-problems of localization (knowing where you are, or finding out where other things are), mapping (learning what is around you, building a map of the environment), and motion planning (figuring out how to get there) or path planning (going from one point in space to another point, which may involve compliant motion – where the robot moves while maintaining physical contact with an object).
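As a small illustration of the path-planning sub-problem, the sketch below runs a breadth-first search over an invented occupancy grid of the kind a robot might build while mapping (1 marks an obstacle, 0 a free cell):

from collections import deque

GRID = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]

def shortest_path(start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk the parent links back to start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                    and GRID[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

print(shortest_path((0, 0), (2, 3)))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 3)]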
Long-term goals
Among the long-term goals in the research pertaining to artificial intelligence are: (1) Social intelligence, (2) Creativity, and (3) General intelligence.

CONCLUSION
Classification systems and artificial intelligence techniques are pervasive and too numerous to list. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect. One area to which artificial intelligence has contributed greatly is intrusion detection.

