THEO is a frame-based cognitive architecture developed as a framework for self-improving systems by the research group of Tom M. Mitchell (Machine Learning Department, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA).
Integrated AI Architecture
Theo stores all knowledge in a frame representation. This uniform representation allows all knowledge to be accessed and manipulated in the same way. A frame is composed of slots, and each slot holds a value, which may be a reference to another frame, a constant, or missing.
Frames represent concepts. A frame slot with a value is said to be a belief. A slot whose value is missing is said to be a problem; such a slot triggers problem solving as Theo attempts to infer its value.
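The frame representation described above can be sketched as follows. This is an illustrative Python sketch, not Theo's actual LISP-level API; the class and method names are assumptions.

```python
class Frame:
    """A concept: a named collection of slots mapping to values.

    A slot value may be a constant, a reference to another Frame,
    or None (missing).
    """
    def __init__(self, name, slots=None):
        self.name = name
        self.slots = dict(slots or {})

    def is_belief(self, slot):
        """A slot with a value constitutes a belief."""
        return self.slots.get(slot) is not None

    def is_problem(self, slot):
        """A slot with a missing value is a problem to be solved."""
        return self.slots.get(slot) is None

# Example: a concept frame and an instance frame referencing it.
bird = Frame("bird", {"can.fly": True})
tweety = Frame("tweety", {"isa": bird, "color": "yellow", "can.fly": None})
```

Here `tweety`'s `color` slot is a belief, while its missing `can.fly` value is a problem that would trigger inference.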
Theo infers slot values of an entity by a three-step process. The first step directly calculates the value via a LISP function stored in the TOGET slot of the entity. If there is no TOGET slot, the second step uses an ordered list of methods to infer the value:
- Default.Value - using the default value of the slot
- Inherits - traversing up the hierarchy until a value is found
- Defines - calculating a value based on other known slot values of the entity
A statistical learning module (SE) attempts to order these methods, placing the methods most likely to succeed early in the list, so that a value is produced in minimal time. The final step, implemented by the Defines method, attempts to calculate the value either from definitions or from the results of previous explanation-based learning.
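The three-step inference process above can be sketched as follows, assuming frames are plain dictionaries with `TOGET`, `defaults`, `isa`, and `defines` entries. The function names and data layout are assumptions for illustration, not Theo's real interface.

```python
def default_value(frame, slot):
    """Method 1: use the default value of the slot, if any."""
    return frame.get("defaults", {}).get(slot)

def inherits(frame, slot):
    """Method 2: traverse up the isa hierarchy until a value is found."""
    parent = frame.get("isa")
    while parent is not None:
        if parent.get(slot) is not None:
            return parent[slot]
        parent = parent.get("isa")
    return None

def defines(frame, slot):
    """Method 3: calculate the value from other known slot values."""
    definition = frame.get("defines", {}).get(slot)
    return definition(frame) if definition else None

def infer(frame, slot):
    # Step 1: a TOGET function attached to the slot computes the value directly.
    toget = frame.get("TOGET", {}).get(slot)
    if toget is not None:
        return toget(frame)
    # Step 2: try the ordered method list (SE would reorder this list,
    # putting the methods most likely to succeed first).
    for method in (default_value, inherits, defines):
        value = method(frame, slot)
        if value is not None:
            return value
    return None  # inference failed
```

For example, an entity with no `can.fly` value of its own would obtain one via `inherits` from its parent frame.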
During inference, Theo saves an explanation of how each slot received its value. These explanations serve as a simple truth maintenance mechanism: when any slot value in the system changes, the values of all dependent slots are marked invalid and are recalculated when needed. Each explanation is also generalized by a module named TMAC to create general macro-methods for computing similar slot values.
When a value has been successfully inferred, Theo caches the result, allowing efficient knowledge access. However, caching every inferred value creates a very large knowledge base that requires special management techniques.
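The interplay of caching and explanation-based invalidation can be sketched as follows. This minimal sketch assumes each cached value records the (frame, slot) pairs it depended on; the class and method names are assumptions.

```python
class SlotCache:
    """Cache of inferred slot values with explanation-based invalidation."""

    def __init__(self):
        self.values = {}  # (frame, slot) -> cached value
        self.deps = {}    # (frame, slot) -> set of keys it depended on

    def store(self, key, value, depends_on=()):
        """Cache an inferred value along with its explanation."""
        self.values[key] = value
        self.deps[key] = set(depends_on)

    def invalidate(self, changed):
        """Mark every slot that transitively depended on `changed` invalid."""
        for key, dep_keys in list(self.deps.items()):
            if changed in dep_keys:
                self.values.pop(key, None)
                self.deps.pop(key, None)
                # Propagate: dependents of `key` are now invalid too.
                self.invalidate(key)
```

Changing a base slot value removes every dependent cached value, which is then recomputed on demand the next time it is requested.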
The three learning mechanisms, SE, TMAC, and caching, operate independently of each other, but in certain cases the results of one mechanism can interfere with the functioning of the others.
Selected publications by Tom Mitchell
- [Mitchell 90] Mitchell, T M (1990), Becoming increasingly reactive. In Proc. AAAI-90, p. 1051-1058.
- [Mitchell et al 91] Mitchell, T M, Allen, J, Chalasani, P, Cheng, J, Etzioni, O, Ringuette, M, Schlimmer, J C (1991), Theo: a framework for self-improving systems. In K VanLehn (Ed.) Architectures for Intelligence. Lawrence Erlbaum, Hillsdale, NJ.
- Toward an Architecture for Never-Ending Language Learning. A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E.R. Hruschka Jr. and T.M. Mitchell. In Proceedings of the Conference on Artificial Intelligence (AAAI), 2010.
- Coupling Semi-Supervised Learning of Categories and Relations, Andrew Carlson, Justin Betteridge, Estevam R. Hruschka Jr. and Tom M. Mitchell, Proceedings of the NAACL HLT 2009 Workshop on Semi-supervised Learning for Natural Language Processing, June 2009.
- "Hidden Process Models," R. A. Hutchinson, T. M Mitchell, and I. Rustandi, Proceedings of the International Conference on Machine Learning, Pittsburgh, PA, June 2006.
- "Models of Learning Systems," B.G. Buchanan, T.M. Mitchell, R.G. Smith, C.R. Johnson, in Encyclopedia of Computer Science and Technology, vol. 11, Marcel Dekker, New York, NY, pp. 24-51. 1978.