Invited speaker: Arnaud Destrebecqz. How do we learn statistically: an attempt to compare bracketing and clustering models

2024/6/13
- BCBL auditorium (and BCBL zoom room 2)

What: How do we learn statistically: an attempt to compare bracketing and clustering models

Where: BCBL Auditorium and Zoom room # 2 (if you would like to attend this meeting, please reserve a spot at info@bcbl.eu)

Who: Professor Arnaud Destrebecqz, PhD, Centre for Research in Cognition & Neurosciences, Faculté des Sciences Psychologiques et de l'Education, Université Libre de Bruxelles (ULB), Bruxelles, Belgium.

When: Thursday, June 13, at 12:00 noon.

What is the nature of the representations acquired in implicit statistical learning? Results have shown that adults and infants are able to find the words of an artificial language when exposed to a continuous auditory sequence consisting of a random ordering of these words. Such performance can only be based on processing the transitional probabilities between sequence elements. However, two different kinds of mechanisms may account for these data: participants may either parse the sequence into smaller chunks corresponding to the words of the artificial language, or they may become progressively sensitive to the actual values of the transitional probabilities between syllables. The two accounts are difficult to differentiate because they make similar predictions in comparable experimental settings. In this study, we present two experiments that aimed at contrasting these two theories. In these experiments, participants had to learn two sets of pseudo-linguistic regularities, Language 1 (L1) and Language 2 (L2), presented in the context of a serial reaction time task. L1 and L2 were either unrelated (none of the syllabic transitions of L1 were present in L2) or partly related (some of the intra-word transitions of L1 were used as inter-word transitions of L2). The two accounts make opposite predictions in these two settings. Our results suggest that the nature of the representations depends on the learning condition. When cues were presented to facilitate parsing of the sequence, participants learned the words of the artificial language. However, when no cues were provided, performance was strongly influenced by the transitional probabilities used.
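The contrast between the two accounts hinges on the transitional probabilities between adjacent syllables: within a word these are high, while across a word boundary they are low. As a purely illustrative sketch (not the procedure used in the experiments reported in the talk; the syllable stream and function name below are invented for the example), first-order transitional probabilities can be estimated from a continuous syllable sequence like this:

```python
from collections import Counter, defaultdict

def transitional_probabilities(syllables):
    """Estimate P(next | current) for each ordered syllable pair in a stream."""
    pair_counts = defaultdict(Counter)
    for cur, nxt in zip(syllables, syllables[1:]):
        pair_counts[cur][nxt] += 1
    return {
        cur: {nxt: n / sum(counts.values()) for nxt, n in counts.items()}
        for cur, counts in pair_counts.items()
    }

# Toy stream built from two hypothetical "words" (tu-pi-ro, go-la-bu) in random order:
stream = "tu pi ro go la bu tu pi ro tu pi ro go la bu".split()
tps = transitional_probabilities(stream)
print(tps["pi"])  # within-word transition: pi -> ro with probability 1.0
print(tps["ro"])  # word boundary: ro -> {go, tu} with lower probabilities
```

On a chunking (bracketing) account, learners would use such dips in transitional probability to segment the stream into word-like units; on a clustering account, they would track the probability values themselves.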