Over the net ...
For some strange reason, Netscape Communicator cannot run the code. If you have a PC, Internet Explorer runs the code without any problems.
When ready to execute the TAM-WG model, just click on the hippocampus below.
Downloading ...
You can also download the tar file TAM_WG.tar.gz (170 Kbytes), which contains all the Java classes (JDK 1.0.2) for the TAM-WG model, to your machine. Use gunzip to uncompress the file:
gunzip TAM_WG.tar.gz
and then tar:
tar xvf TAM_WG.tar
to extract the files. You will then be able to execute the model with appletviewer (which avoids the flickering seen in browsers):
appletviewer tam_wg_model.html
The file tam_wg_model.html is contained in the tar file.
ATTENTION! An updated version of TAM-WG containing many new experiments will be available soon (Java 1.2).
Currently, TAM-WG implements four different behavioral experiments, which were originally described in:
Hirsh, R., Leber, B., and Gillman, K. (1978). Fornix fibers and motivational states as controllers of behavior: A study stimulated by the contextual retrieval theory. Behavioral Biology 22:463-478.
The model allows you to choose one of the following experiments:
Hirsh et al. Exp. (1978): Control
Hirsh et al. Exp. (1978): Fornix-Lesioned
O'Keefe Exp. (1983): Control
O'Keefe Exp. (1983): Fornix-Lesioned
If you select one of the fornix-lesioned groups, TAM is executed straight away, since there is no pre-training phase.
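As a rough illustration of this control flow, here is a minimal Java sketch (the class and method names are hypothetical, not the applet's actual code): only the control groups go through a World Graph pre-training phase before the trials begin.

public class ExperimentRunner {

    // Hypothetical sketch, not the applet's actual classes.
    static void run(String experiment, boolean fornixLesioned) {
        if (!fornixLesioned) {
            // Control animals build the World Graph (hippocampal map)
            // during a pre-training exploration phase.
            preTrainWorldGraph(experiment);
        }
        // Fornix-lesioned animals skip straight to the trials,
        // driven by the taxon (TAM) system alone.
        runTrials(experiment, fornixLesioned);
    }

    static void preTrainWorldGraph(String experiment) {
        System.out.println("Pre-training WG for " + experiment);
    }

    static void runTrials(String experiment, boolean fornixLesioned) {
        System.out.println("Running " + experiment
                + (fornixLesioned ? " (fornix-lesioned)" : " (control)"));
    }

    public static void main(String[] args) {
        run("Hirsh et al. (1978)", true);   // lesioned: no pre-training
        run("O'Keefe (1983)", false);       // control: pre-training first
    }
}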
Once you select an experiment (and, for the control groups, once the pre-training phase is over), a window displaying the maze environment is created. From then on, you can open any of the other windows available in the system at any time. These are called Output Windows and are selected from the main window (shown above). The available output windows are listed below; a simplified code sketch of how the main window drives them follows the list:
Curiosity: displays the activity bumps associated with the unknown;
Drives: displays the drive levels. Four drives are implemented, but the experiments currently available use only hunger and thirst. The values associated with these drives can be changed at any time using the corresponding drive slider in the main window;
Head Direction: displays the bump of activity related to the animal's current head direction;
Head Direction Code: displays the matrix of activations of the Head Direction feature detector layer of neurons (only available for the control groups);
Obstacles: displays the activity bumps associated with obstacles encountered in the environment;
Output: displays useful messages about the execution of the program;
Place Code: displays the activity levels of the place layer (only available for the control groups);
Reinforcement: displays the reinforcement weights for the taxon system in the form of activity bumps;
Rewarding Stimuli: displays the attractant field related to a rewarding stimulus (for example, food) once it is perceived;
Rewardness Expectation: displays the center of mass of the reinforcement activity for the taxon system;
The Maze Environment: displays the maze environment where the experiment takes place;
Turning Angle: displays the orientation the animal will take in its next step;
Walls: displays the activity related to the sensing of the environment walls;
Walls Code: displays the matrix of activations of the Walls feature detector layer of neurons (only available for the control groups, i.e., it is a WG-related window);
WG: displays the World Graph;
WG Influence: displays the activity bumps related to the learning of future rewards coded in the animal's world graph.
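To give an idea of how such a main window can drive the output windows, here is a simplified AWT sketch in modern Java (the widget layout, class names, and drive ranges are assumptions, not the applet's actual JDK 1.0.2 code): checkboxes open or close output frames, and scrollbars stand in for the hunger and thirst drive sliders.

import java.awt.*;
import java.awt.event.*;

public class MainWindowSketch extends Frame {

    MainWindowSketch() {
        super("TAM-WG main window (sketch)");
        setLayout(new GridLayout(0, 1));

        // One checkbox per output window; ticking it shows the frame.
        String[] windows = { "Curiosity", "Drives", "Head Direction",
                             "Place Code", "The Maze Environment", "WG" };
        for (String name : windows) {
            Checkbox box = new Checkbox(name);
            Frame output = new Frame(name);
            output.setSize(300, 200);
            box.addItemListener(e -> output.setVisible(box.getState()));
            add(box);
        }

        // Drive sliders: scrollbar values 0-100 stand in for drive levels.
        add(makeDriveSlider("Hunger"));
        add(makeDriveSlider("Thirst"));

        addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) { System.exit(0); }
        });
        pack();
    }

    private Panel makeDriveSlider(String drive) {
        Panel p = new Panel(new BorderLayout());
        Scrollbar s = new Scrollbar(Scrollbar.HORIZONTAL, 50, 5, 0, 105);
        s.addAdjustmentListener(
            e -> System.out.println(drive + " drive set to " + e.getValue()));
        p.add(new Label(drive), BorderLayout.WEST);
        p.add(s, BorderLayout.CENTER);
        return p;
    }

    public static void main(String[] args) {
        new MainWindowSketch().setVisible(true);
    }
}

Compile with javac MainWindowSketch.java and run with java MainWindowSketch to see the layout.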
All contents copyright (C) 1994-1997 University of Southern California Brain Simulation Lab. All rights reserved.
Author: Alex Guazzelli <aguazzel@rana.usc.edu>