Kollias, P. & McClelland, J.
Stanford University
We present a PDP model that solves verbal analogy problems [A:B as C:(?) with alternatives D1 or D2], such as [pig:boar as dog:(?) with alternatives wolf or cat]. We train a recurrent network to complete item-relation-item triples when given any two of a triple's elements as inputs (the network has an input pool for each element and a single hidden layer; Hinton, 1981). In testing, two copies of the trained network share a common relation pool, so that both the A and B terms and the C and one candidate D term jointly constrain the search for a relation. While the A, B, and C items are clamped for the entire testing process, the D items are clamped only at the beginning; the D alternative with the strongest "echo" of activation at the end of testing is chosen. Without training on analogy problems per se, the model explains the developmental shift from associative to relational responding as an emergent consequence of learning. Such learning allows gradual, item-specific acquisition of relational knowledge (e.g., the relation "domesticated form of" shared by pig:boar and dog:wolf) to overcome the influence of unbalanced association frequency (which favors dog:cat in the example), accounting for the item-frequency sensitivity of analogical reasoning seen in cognitive development. The network also captures the overall degradation in performance after anterior temporal damage by deleting a fraction of learned connections, while capturing the return of associative dominance after frontal damage by treating frontal structures as necessary for maintaining activation of A and B while seeking a relation between C and D. While our theory is still far from complete, it provides a unified explanation of findings not previously considered together.
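The decision logic described above, in which relational evidence jointly constrained by A:B and C:D competes with raw association frequency, can be illustrated with a minimal sketch. This is not the authors' network simulation: the triple strengths, association frequencies, and the `relational_weight` knob (standing in for how much relational knowledge has been acquired, or how much survives anterior temporal damage) are all hypothetical toy values chosen only to show how the balance of evidence determines the chosen D alternative.

```python
# Toy sketch of the relational-vs-associative choice (hypothetical values,
# not the trained PDP model): a D candidate's "echo" combines the best
# relation fitting both A:B and C:D with the raw C-D association.

# Hypothetical learned relational knowledge: (item, relation, item) strengths.
relational = {
    ("pig", "domesticated_form_of", "boar"): 1.0,
    ("dog", "domesticated_form_of", "wolf"): 1.0,
}

# Hypothetical association frequencies (unbalanced in favor of dog:cat).
association = {("dog", "cat"): 1.2, ("dog", "wolf"): 0.5}

def choose(a, b, c, candidates, relational_weight=1.0):
    """Pick the D candidate with the strongest echo.

    relational_weight scales relational knowledge: a low value mimics
    early development (or relational-knowledge loss), so associative
    strength dominates the response.
    """
    relations = {r for (_, r, _) in relational}
    def echo(d):
        # Relation jointly constrained by A:B and by C:D (shared pool).
        rel_fit = max(
            (relational.get((a, r, b), 0.0) * relational.get((c, r, d), 0.0)
             for r in relations),
            default=0.0,
        )
        return relational_weight * rel_fit + association.get((c, d), 0.0)
    return max(candidates, key=echo)

# With relational knowledge in place, the relational choice wins.
print(choose("pig", "boar", "dog", ["wolf", "cat"]))                        # wolf
# Without it, the stronger association dominates (associative responding).
print(choose("pig", "boar", "dog", ["wolf", "cat"], relational_weight=0.0)) # cat
```

Under these toy values, turning `relational_weight` down reproduces the abstract's developmental shift in reverse: the model falls back from the relational response (wolf) to the associative one (cat).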