Archive

Posts Tagged ‘Knowledge Management’

Scripts

September 4, 2010 Leave a comment

Scripts, developed in early AI work by Roger Schank, Robert P. Abelson, and their research group, are a method of representing procedural knowledge. They are much like frames, except that the values filling the slots must be ordered.

The classic example of a script is the typical sequence of events that occurs when a person dines in a restaurant: finding a seat, reading the menu, ordering drinks from the wait staff, and so on. In script form, these events are decomposed into conceptual primitives such as MTRANS (a mental transfer of information) and PTRANS (a physical transfer of location).

Schank, Abelson, and their colleagues tackled some of the most difficult problems in artificial intelligence (e.g., story understanding), but their line of work ultimately ended without tangible success. This type of work received little attention after the 1980s, but it has been very influential on later knowledge representation techniques, such as case-based reasoning.

Scripts can be inflexible. To deal with this inflexibility, smaller modules called memory organization packets (MOPs) can be combined in a way that is appropriate to the situation.
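As a minimal sketch of the idea, a script can be modeled as a frame whose slot fillers are ordered, so the structure can predict the next expected event. The event names and primitives below are illustrative, not Schank's exact inventory:

```python
# A minimal sketch of a restaurant script: like a frame, but the
# slot fillers (events) are ordered. Names here are illustrative.
from dataclasses import dataclass

@dataclass
class Event:
    primitive: str    # conceptual primitive, e.g. PTRANS (physical transfer)
    description: str

RESTAURANT_SCRIPT = [
    Event("PTRANS", "customer moves to a seat"),
    Event("MTRANS", "customer reads the menu"),
    Event("MTRANS", "customer tells the order to the wait staff"),
    Event("PTRANS", "wait staff brings drinks to the table"),
]

def next_expected(script, completed):
    """Return the next event the script predicts, given how many steps are done."""
    return script[completed] if completed < len(script) else None

# After seating and reading the menu, the script predicts ordering:
print(next_expected(RESTAURANT_SCRIPT, 2).description)
```

Because the events are ordered, the script supports the kind of expectation-driven inference story understanders needed: given a partial sequence, the next slot is predictable.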

Semantic Nets

September 4, 2010 Leave a comment
[Image via Wikipedia: an example of a semantic network]

A semantic network or net is a graphic notation for representing knowledge in patterns of interconnected nodes and arcs. Computer implementations of semantic networks were first developed for artificial intelligence and machine translation, but earlier versions have long been used in philosophy, psychology, and linguistics.

What is common to all semantic networks is a declarative graphic representation that can be used either to represent knowledge or to support automated systems for reasoning about knowledge. Some versions are highly informal, while others are formally defined systems of logic. Following are six of the most common kinds of semantic networks.

  • A definitional network emphasizes the subtype or is-a relation between a concept type and a newly defined subtype. The resulting network, also called a generalization or subsumption hierarchy, supports the rule of inheritance for copying properties defined for a supertype to all of its subtypes. Since definitions are true by definition, the information in these networks is often assumed to be necessarily true.
  • Assertional networks are designed to assert propositions. Unlike definitional networks, the information in an assertional network is assumed contingently true, unless it is explicitly marked with a modal operator. Some assertional networks have been proposed as models of the conceptual structures underlying natural language semantics.
  • Implicational networks use implication as the primary relation for connecting nodes. They may be used to represent patterns of beliefs, causality, or inferences.
  • Executable networks include some mechanism, such as marker passing or attached procedures, which can perform inferences, pass messages, or search for patterns and associations.
  • Learning networks build or extend their representations by acquiring knowledge from examples. The new knowledge may change the old network by adding and deleting nodes and arcs or by modifying numerical values, called weights, associated with the nodes and arcs.
  • Hybrid networks combine two or more of the previous techniques, either in a single network or in separate, but closely interacting networks.
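As a minimal sketch of the first kind, a definitional network can be modeled as is-a links plus property inheritance down the subsumption hierarchy. The node names and properties below are illustrative:

```python
# A tiny definitional network: is-a links plus inherited properties.
# Nodes and properties are illustrative examples.
is_a = {"canary": "bird", "bird": "animal"}
properties = {
    "animal": {"breathes": True},
    "bird": {"can_fly": True},
    "canary": {"color": "yellow"},
}

def inherited(node):
    """Collect properties by walking up the is-a hierarchy;
    more specific nodes override values inherited from supertypes."""
    chain = []
    while node is not None:
        chain.append(node)
        node = is_a.get(node)
    merged = {}
    for n in reversed(chain):   # general first, so specific overrides
        merged.update(properties.get(n, {}))
    return merged

print(inherited("canary"))   # breathes and can_fly are inherited, color is local
```

Walking the chain from general to specific is one simple way to realize the inheritance rule mentioned above; real systems add exceptions and multiple inheritance on top of this core.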

Some of the networks have been explicitly designed to implement hypotheses about human cognitive mechanisms, while others have been designed primarily for computer efficiency. Sometimes, computational reasons may lead to the same conclusions as psychological evidence. The distinction between definitional and assertional networks, for example, has a close parallel to Tulving’s (1972) distinction between semantic memory and episodic memory.

Different knowledge representation techniques

September 4, 2010 1 comment

There are representation techniques such as frames, rules, tagging, and semantic networks, which originated in theories of human information processing. Since knowledge is used to achieve intelligent behavior, the fundamental goal of knowledge representation is to represent knowledge in a manner that facilitates inferencing (i.e., drawing conclusions) from it.

Some issues that arise in knowledge representation from an AI perspective are:

  • How do people represent knowledge?
  • What is the nature of knowledge?
  • Should a representation scheme deal with a particular domain or should it be general purpose?
  • How expressive is a representation scheme or formal language?
  • Should the scheme be declarative or procedural?

There has been very little top-down discussion of knowledge representation (KR) issues, and research in this area remains fragmented. There are well-known problems such as “spreading activation” (a problem in navigating a network of nodes), “subsumption” (concerned with selective inheritance; e.g., an ATV can be thought of as a specialization of a car, yet it inherits only particular characteristics), and “classification” (e.g., a tomato could be classified both as a fruit and as a vegetable).

In the field of artificial intelligence, problem solving can be simplified by an appropriate choice of knowledge representation. Representing knowledge in some ways makes certain problems easier to solve. For example, it is easier to divide numbers represented in Hindu-Arabic numerals than numbers represented as Roman numerals.
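The numeral example can be made concrete: to divide numbers written as Roman numerals, the natural move is to change representation first, converting to positional notation where division is trivial. The converter below is a standard sketch, not tied to any particular source:

```python
# Division is hard on Roman numerals but trivial on positional
# (Hindu-Arabic) numbers, so the easiest strategy is to change
# representation before computing.
ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s):
    total = 0
    for i, ch in enumerate(s):
        v = ROMAN[ch]
        # subtractive notation: a smaller value before a larger one is subtracted
        if i + 1 < len(s) and ROMAN[s[i + 1]] > v:
            total -= v
        else:
            total += v
    return total

# MCMXCVI / VIII: once both operands are positional, the division
# itself is a single machine operation.
print(roman_to_int("MCMXCVI") / roman_to_int("VIII"))
```

The point is exactly the one made above: the choice of representation, not the underlying quantity, determines how hard the problem is.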

Types of Knowledge

September 4, 2010 1 comment
  • Static knowledge – unlikely to change
  • Dynamic knowledge – likely to change (e.g., records in a database)
  • Surface knowledge – accumulated through experience
  • Deep knowledge – theories, proofs, and problem specifics
  • Procedural knowledge – describes how a problem is solved
  • Declarative knowledge – describes what is known about a problem
  • Meta-knowledge – describes knowledge about knowledge
  • Heuristic knowledge – describes rules of thumb that guide the reasoning process
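The procedural/declarative/heuristic distinction can be sketched in a few lines. The temperature-conversion domain below is purely illustrative:

```python
# Declarative knowledge states *what* is true; procedural knowledge
# encodes *how* to compute something; heuristic knowledge is a rule
# of thumb that guides reasoning without guaranteeing correctness.
# The facts and formulas below are illustrative examples.

# Declarative: facts about the problem, stored as data.
facts = {"boiling_point_c": 100, "freezing_point_c": 0}

# Procedural: a procedure embodying how the conversion is done.
def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

# Heuristic: a quick mental estimate ("double it and add 30").
def rough_fahrenheit(c):
    return c * 2 + 30

print(celsius_to_fahrenheit(facts["boiling_point_c"]))  # exact: 212.0
print(rough_fahrenheit(facts["boiling_point_c"]))       # estimate: 230
```

Note how the heuristic is close for everyday temperatures but drifts at the extremes, which is exactly the trade-off heuristic knowledge accepts.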

History of Knowledge Representation

September 4, 2010 Leave a comment

In computer science, particularly artificial intelligence, a number of representations have been devised to structure information.

KR is most commonly used to refer to representations intended for processing by modern computers, and in particular to representations consisting of explicit objects (the class of all elephants, or Clyde, a certain individual) and of assertions or claims about them (‘Clyde is an elephant’, or ‘all elephants are grey’). Representing knowledge in such explicit form enables computers to draw conclusions from knowledge already stored (‘Clyde is grey’).
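The Clyde example can be sketched as explicit facts plus one universal rule, with a forward-chaining step deriving the new assertion. The encoding below (predicate/subject pairs) is a deliberately minimal illustration, not a real KR system:

```python
# Explicit facts and a universal rule, as in the Clyde example.
# Facts are (predicate, subject) pairs; a rule (p, q) means
# "for every X, p(X) implies q(X)". The encoding is illustrative.
facts = {("elephant", "clyde")}
rules = [("elephant", "grey")]   # all elephants are grey

def forward_chain(facts, rules):
    """Repeatedly apply rules to known facts until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subj in list(derived):
                new = (conclusion, subj)
                if pred == premise and new not in derived:
                    derived.add(new)
                    changed = True
    return derived

print(("grey", "clyde") in forward_chain(facts, rules))  # True
```

This is the payoff of explicit representation described above: ‘Clyde is grey’ was never stored, yet it falls out mechanically from what was.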

Many KR methods were tried in the 1970s and early 1980s, such as heuristic question-answering, neural networks, theorem proving, and expert systems, with varying success. Medical diagnosis (e.g., Mycin) was a major application area, as were games such as chess.

In the 1980s, formal computer knowledge representation languages and systems arose. Major projects attempted to encode wide bodies of general knowledge; for example the “Cyc” project (still ongoing) went through a large encyclopedia, encoding not the information itself, but the information a reader would need in order to understand the encyclopedia: naive physics; notions of time, causality, motivation; commonplace objects and classes of objects.

Through such work, the difficulty of KR came to be better appreciated. In computational linguistics, meanwhile, much larger databases of language information were being built, and these, along with great increases in computer speed and capacity, made deeper KR more feasible.

Several programming languages have been developed that are oriented toward KR. Prolog, developed in 1972 but popularized much later, represents propositions and basic logic and can derive conclusions from known premises. KL-ONE (1980s) is more specifically aimed at knowledge representation itself. In 1995, the Dublin Core metadata standard was conceived.

In the electronic document world, languages were being developed to represent the structure of documents, such as SGML (from which HTML descended) and later XML. These facilitated information retrieval and data mining efforts, which have in recent years begun to relate to knowledge representation.

Knowledge Representation & Reasoning

September 4, 2010 Leave a comment

Knowledge representation and reasoning is an area of artificial intelligence whose fundamental goal is to represent knowledge in a manner that facilitates inferencing (i.e., drawing conclusions) from knowledge. It analyzes how to think formally – how to use a symbol system to represent a domain of discourse (that which can be talked about), along with functions that allow inference (formalized reasoning) about its objects. Some kind of logic is used both to supply formal semantics for how the reasoning functions apply to symbols in the domain of discourse and to supply operators such as quantifiers and modal operators that, together with an interpretation theory, give meaning to the sentences of the logic.

When we design a knowledge representation (and a knowledge representation system to interpret sentences in the logic in order to derive inferences from them), we have to make choices across a number of design spaces. The single most important decision is the expressivity of the KR. The more expressive it is, the easier and more compact it is to “say something”. However, the more expressive a language is, the harder it is to derive inferences from it automatically. An example of a less expressive KR is propositional logic; an example of a more expressive one is autoepistemic temporal modal logic. Less expressive KRs may be both complete and consistent (formally, less expressive than set theory), while more expressive KRs may be neither complete nor consistent.
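The tractability side of this trade-off can be shown directly: propositional logic is inexpressive enough that entailment is decidable by brute force over all truth assignments, something impossible for richer logics. Representing formulas as Python functions over an assignment is an illustrative convenience:

```python
# Propositional entailment by model enumeration: KB |= query iff
# the query holds in every truth assignment that satisfies the KB.
# This brute-force check is only possible because propositional
# logic is so inexpressive; the vocabulary here is illustrative.
from itertools import product

def entails(kb, query, symbols):
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False   # found a model of KB where the query fails
    return True

# KB: it rains, and rain implies wet.  Query: is it wet?
kb = lambda m: m["rain"] and (not m["rain"] or m["wet"])
print(entails(kb, lambda m: m["wet"], ["rain", "wet"]))  # True
```

The enumeration visits 2^n assignments, so even here inference cost grows exponentially with vocabulary size; more expressive logics give up decidability altogether, which is the completeness/consistency trade-off described above.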