Paul G. Joseph
December 10, 2010
A key problem in AI, one well known to “philosophers of mind,” is that computers at present do not give the objects they manipulate any “meaning.” A large body of work in philosophy addresses what the meaning of meaning is. One view holds that the meaning of an object is its value to us in terms of its contribution to mastering our environment, and, following from this, that meaning is intrinsically emotional because of its direct impact on survival. Work in neuroscience indicates that mammals have seven systems that, when stimulated, produce behaviors we call emotions. This project models these emotions as a seven-variable dynamical system and uses interactions with objects to assign values to these variables, and hence meaning.
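The seven-variable model described above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the emotion names follow the seven affective systems the text alludes to, and the decay factor and stimulus values are assumptions chosen for the example.

```python
# Hypothetical sketch of a seven-variable linear emotion model.
# The DECAY constant and stimulus vectors are illustrative assumptions.
EMOTIONS = ["SEEKING", "RAGE", "FEAR", "PANIC", "CARE", "PLAY", "LUST"]
DECAY = 0.9  # each emotion relaxes toward zero between stimuli (assumed value)

class EmotionState:
    def __init__(self):
        self.values = {e: 0.0 for e in EMOTIONS}

    def step(self, stimulus):
        """Linear update: v <- DECAY * v + stimulus[e] for each emotion e."""
        for e in EMOTIONS:
            self.values[e] = DECAY * self.values[e] + stimulus.get(e, 0.0)
        return dict(self.values)

state = EmotionState()
# a ghost sighting might raise FEAR and PANIC (illustrative stimulus)
after_ghost = state.step({"FEAR": 1.0, "PANIC": 0.5})  # FEAR -> 1.0
```

An object's "meaning" to the agent is then just the stimulus vector it contributes to this update, i.e. the emotional impact of interacting with it.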
- The agent uses a heuristic to determine the next action to take. The search heuristic consists of the agent's current "happiness" state plus the additional happiness it will accrue by taking the next step. The agent takes the step that maximizes its happiness. From a broad search perspective, the agent is a "utility-based reflex" agent using a "best-first search." The goals keep changing depending on the agent's current state as determined by the dynamical system; for example, if FEAR predominates, the goal is to escape, and so on.
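The happiness-maximizing step choice can be sketched as a one-step greedy selection. The function and parameter names here (`choose_action`, `projected_happiness`) are illustrative stand-ins, not the Pacman framework's actual API.

```python
# Minimal sketch of the best-first action choice described above.
# `projected_happiness` is a hypothetical callback estimating the extra
# happiness accrued by taking a given action.
def choose_action(current_happiness, legal_actions, projected_happiness):
    """Pick the action maximizing current happiness plus the projected
    gain from the next step -- a one-step greedy utility choice."""
    return max(legal_actions,
               key=lambda a: current_happiness + projected_happiness(a))

# toy usage: moving toward food adds happiness, toward a ghost subtracts it
gains = {"North": 0.5, "South": -1.0, "East": 0.2}
best = choose_action(0.0, list(gains), lambda a: gains[a])  # -> "North"
```

Because the gains themselves are recomputed from the dynamical system's current state, the same selection rule serves shifting goals: when FEAR dominates, escape directions score highest.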
At some future time, learning can easily be incorporated by storing, for example, the ghost and the attributes (emotions) the ghost last elicited. The next time Pacman sees a ghost, it will find it in its "memory" and, instead of learning from scratch, will be able to use its past learning. Also, by setting a higher goal such as "species survival," it may be possible to use genetic algorithms to drive the composition of the dynamical system so as to optimize for that goal. The immediate next step, though, is to explain the puzzling results found even with the simple linear model.
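The proposed "memory" amounts to caching the emotion values an object type last elicited and reusing them on repeat encounters. A minimal sketch, with hypothetical names (`percept_memory`, `recall_or_learn`) not drawn from the project:

```python
# Sketch of the proposed memory: cache the emotions an object type elicited
# so a repeat encounter reuses them instead of relearning from scratch.
percept_memory = {}

def recall_or_learn(object_type, learn_fn):
    """Return the remembered emotion values for this object type,
    computing and caching them only on first encounter."""
    if object_type not in percept_memory:
        percept_memory[object_type] = learn_fn(object_type)
    return percept_memory[object_type]

calls = []
def learn(obj):
    calls.append(obj)                      # track how often we learn from scratch
    return {"FEAR": 1.0, "PANIC": 0.5}     # illustrative learned emotions

first = recall_or_learn("ghost", learn)
second = recall_or_learn("ghost", learn)   # served from memory; learn() not re-run
```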
The project seeks to explore the meaning of meaning, i.e., meta-meaning, in a simple framework. In so doing, it directly addresses a key obstacle to "hard AI": at present, computers do not know the "meaning" of the objects they manipulate. This project offers the opportunity to study questions related to this issue in a simple, lightweight framework. As the "emotional needs" of the agent, as represented in the dynamical-systems model, change, so too do its immediate goals, allowing it to react fluidly to a wide range of changing circumstances.
Technology Used Block Diagram
The model is implemented in the Pacman game used as part of the course, which is written in Python. Instead of a technology block diagram, I have shown the heuristic used.
Evaluation of Results
The variation of the variables that determine emotions shows that, in general, their behavior trended as expected, but it is too early to say whether these values mean anything. Even with a simple linear model, puzzling output values were found: FEAR and CARE took values that fell into two "bands," while RAGE and PANIC took values that fell into three "bands."
In all cases, Pacman ultimately died (as one would expect with no learning in place). However, even this simple model raises deep questions: Is this real meaning or just a sophisticated algorithm? How do we objectively evaluate the results? Do we need a body to feel meaning? Does the body need to be made of flesh and blood, or can it be a machine? By creating a framework to address these questions, the hope is that the objections of the phenomenologists to AI can be studied and, if possible, leveraged. Even with a simple model, intriguing questions at the heart of "hard AI" come into direct focus, and this project appears to provide a convenient framework in which to begin doing so.