Varona-Moya, S. & Cobos, P. L.
Department of Basic Psychology, Faculty of Psychology, University of Málaga (Spain)
We carried out a review of Hinton's work "Learning Distributed Representations of Concepts", in which a five-layer feed-forward perceptron's capacity to make analogy-based generalizations between two isomorphic family trees was assessed. In our review, 500 simulations were subjected to a corrected version of his generalization test. This version comprised two items that induced only structure-based inferences. The performance results were quite disappointing: only 33 simulations produced the correct output for both test items, while 194 answered neither. To understand why so few simulations had benefited from learning an isomorphic family tree, we turned to principal component analysis (PCA) of the internal distributed representations of concepts. Specifically, we visualized these representations as points in a PCA-based three-dimensional space. This analysis revealed that geometrical alignments of concepts occasionally emerged in the network's hidden layers. According to these alignments, analogous entities from different domains were spatially matched. The likely influence of these alignments on the network's analogy-based generalization capacity was examined by means of an ANOVA, whose results indicated that simulations in which such alignments had emerged performed significantly better on the generalization test than those in which the internal representational spaces were less organized. Moreover, the low proportion of simulations with highly organized representational spaces accounted for the poor overall performance on our generalization test. Therefore, the network's capacity to make analogical inferences seems to be determined by the probability that structurally aligned representations of concepts form in its hidden layers.
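The PCA-based visualization described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the activation matrix here is random placeholder data standing in for the hidden-layer activations that a trained network would produce for each concept, and the dimensions (24 concepts, 6 hidden units, 3 principal components) are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden-layer activations: one row per concept unit
# (e.g. 12 people per family tree, two isomorphic trees = 24 rows),
# one column per hidden unit. In the study these would be read off
# the trained network's hidden layer, not sampled at random.
activations = rng.normal(size=(24, 6))

def pca_project(X, k=3):
    """Project the rows of X onto their first k principal components via SVD."""
    Xc = X - X.mean(axis=0)                        # center each hidden dimension
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                           # coordinates in the k-D PCA space

points = pca_project(activations)                  # one 3-D point per concept
print(points.shape)                                # (24, 3)
```

Plotting the rows of `points`, with analogous concepts from the two family trees colored alike, would reveal whether the spatial alignments discussed in the abstract have emerged in a given simulation.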