Tovar, A. E. 1, 2 & Westermann, G. 2
1 Facultad de Psicología, Universidad Nacional Autónoma de México
2 Department of Psychology, Lancaster University, Lancaster, UK
When human participants are trained to learn relations between stimuli of the kind "A is related to B" (ArB) and "B is related to C" (BrC), they are able to derive that A is related to C (ArC) without further training. Many behavioral studies on Stimulus Equivalence (SE) have been concerned with the prerequisites for derived transitivity relations to "emerge" in the behavioral repertoire of participants, and with describing features of the derived relations; for example, an initial weakness in ArC (derived) compared with ArB (trained).
Several artificial neural network models based on backpropagation learning have addressed SE tasks; however, the limited biological plausibility of these models motivates the search for more realistic learning algorithms.
Our purpose is to show that a simple Hebbian learning algorithm can be used to simulate SE. In line with evidence suggesting that simple co-occurrence of stimuli is sufficient for human participants to learn and derive stimulus relations, we present a Hebbian neural network model that learns to relate different stimuli by extracting co-occurrence information. A core aspect of the model is the maintenance of homeostasis in the connection weights, following empirical evidence of this property in biological neural networks. In the model, the weights between co-occurring stimuli are strengthened, but as activation propagates through the network, connections between stimuli that do not directly co-occur are also changed, leading to the realistic learning of the transitivity relations described above.
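To make the mechanism concrete, the following is a minimal sketch in Python/NumPy of the general idea, not the implementation reported here: stimuli are localist units, a co-occurring pair is clamped on, activation spreads one step through the current weights before the Hebbian update, and each unit's total outgoing weight is capped as a simple stand-in for homeostasis. The function name train and its settings (epochs, learning rate, weight budget, single spread step) are illustrative assumptions.

import numpy as np

def train(stimuli, pairs, epochs=50, lr=0.1, budget=1.0):
    # Hebbian learning over localist stimulus units with a homeostatic cap
    # (sketch only: parameters and normalization scheme are illustrative).
    idx = {s: i for i, s in enumerate(stimuli)}
    n = len(stimuli)
    W = np.zeros((n, n))
    for _ in range(epochs):
        for s1, s2 in pairs:
            a = np.zeros(n)
            a[idx[s1]] = a[idx[s2]] = 1.0  # clamp the co-occurring pair
            a = a + W @ a                  # one step of activation spread
            W += lr * np.outer(a, a)       # Hebbian update: dW_ij grows with a_i * a_j
            np.fill_diagonal(W, 0.0)       # no self-connections
            # Homeostasis: rescale any unit whose total outgoing weight
            # exceeds the fixed budget
            norms = W.sum(axis=1, keepdims=True)
            W = np.where(norms > budget, W * budget / norms, W)
    return W, idx

W, idx = train(["A", "B", "C"], [("A", "B"), ("B", "C")])
print("trained A-B:", W[idx["A"], idx["B"]])
print("derived A-C:", W[idx["A"], idx["C"]])  # nonzero despite no direct A-C training

Because activation spreads before the weight update, the B unit bridges A and C, so the A-C weight grows even though that pair is never presented together.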
This biologically plausible algorithm can account for the kinds of outputs regularly seen in behavioral studies. Specifically, the model showed a "nodal distance effect": a stronger increase in the connections linking closer derived pairs (e.g., ArC, with one intermediary node) than in those linking more distant derived pairs (e.g., ArD, with two intermediary nodes). We discuss an extension of the model to simulate lexical priming.
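Under the same illustrative assumptions, the sketch above reproduces this ordering: training A-B, B-C, and C-D leaves the derived A-C weight (one node) larger than the derived A-D weight (two nodes), because each additional intermediary attenuates the spread of activation that drives the Hebbian update.

W, idx = train(["A", "B", "C", "D"], [("A", "B"), ("B", "C"), ("C", "D")])
print("derived A-C (one node): ", W[idx["A"], idx["C"]])
print("derived A-D (two nodes):", W[idx["A"], idx["D"]])
# With these illustrative settings, A-B (trained) > A-C > A-D,
# mirroring both the initial weakness of derived relations and
# the nodal distance effect described above.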