  • Araújo, S., Huettig, F., & Meyer, A. S. (2021). What underlies the deficit in rapid automatized naming (RAN) in adults with dyslexia? Evidence from eye movements. Scientific Studies of Reading, 25(6), 534-549. doi:10.1080/10888438.2020.1867863.

    Abstract

    This eye-tracking study explored how phonological encoding and speech production planning for successive words are coordinated in adult readers with dyslexia (N = 22) and control readers (N = 25) during rapid automatized naming (RAN). Using an object-RAN task, we orthogonally manipulated the word-form frequency and phonological neighborhood density of the object names and assessed the effects on speech and eye movements and their temporal coordination. In both groups, there was a significant interaction between word frequency and neighborhood density: shorter fixations for dense than for sparse neighborhoods were observed for low-, but not for high-frequency words. This finding does not suggest a specific difficulty in lexical phonological access in dyslexia. However, in readers with dyslexia only, these lexical effects percolated to the late processing stages, indicated by longer offset eye-speech lags. We close by discussing potential reasons for this finding, including suboptimal specification of phonological representations and deficits in attention control or in multi-item coordination.
  • Arunkumar, M., Van Paridon, J., Ostarek, M., & Huettig, F. (2021). Do illiterates have illusions? A conceptual (non)replication of Luria (1976). Journal of Cultural Cognitive Science, 5, 143-158. doi:10.1007/s41809-021-00080-x.

    Abstract

    Luria (1976) famously observed that people who never learnt to read and write do not perceive visual illusions. We conducted a conceptual replication of the Luria study of the effect of literacy on the processing of visual illusions. We designed two carefully controlled experiments with 161 participants with varying literacy levels, ranging from complete illiterates to high literates, in Chennai, India. Accuracy and reaction time in the identification of two types of visual shape and color illusions and the identification of appropriate control images were measured. Separate statistical analyses of Experiments 1 and 2 as well as pooled analyses of both experiments do not provide any support for the notion that literacy affects the perception of visual illusions. Our large-sample, carefully controlled study strongly suggests that literacy does not meaningfully affect the identification of visual illusions and raises questions about other reports of cultural effects on illusion perception.
  • Bartolozzi, F., Jongman, S. R., & Meyer, A. S. (2021). Concurrent speech planning does not eliminate repetition priming from spoken words: Evidence from linguistic dual-tasking. Journal of Experimental Psychology: Learning, Memory, and Cognition, 47(3), 466-480. doi:10.1037/xlm0000944.

    Abstract

    In conversation, production and comprehension processes may overlap, causing interference. In 3 experiments, we investigated whether repetition priming can work as a supporting device, reducing costs associated with linguistic dual-tasking. Experiment 1 established the rate of decay of repetition priming from spoken words to picture naming for primes embedded in sentences. Experiments 2 and 3 investigated whether the rate of decay was faster when participants comprehended the prime while planning to name unrelated pictures. In all experiments, the primed picture followed the sentences featuring the prime on the same trial, or 10 or 50 trials later. The results of the 3 experiments were strikingly similar: robust repetition priming was observed when the primed picture followed the prime sentence. Thus, repetition priming was observed even when the primes were processed while the participants prepared an unrelated spoken utterance. Priming might, therefore, support utterance planning in conversation, where speakers routinely listen while planning their utterances.

    Additional information

    supplemental material
  • Bosker, H. R. (2021). Using fuzzy string matching for automated assessment of listener transcripts in speech intelligibility studies. Behavior Research Methods, 53(5), 1945-1953. doi:10.3758/s13428-021-01542-4.

    Abstract

    Many studies of speech perception assess the intelligibility of spoken sentence stimuli by means of transcription tasks (‘type out what you hear’). The intelligibility of a given stimulus is then often expressed in terms of percentage of words correctly reported from the target sentence. Yet scoring the participants’ raw responses for words correctly identified from the target sentence is a time-consuming task, and hence resource-intensive. Moreover, there is no consensus among speech scientists about what specific protocol to use for the human scoring, limiting the reliability of human scores. The present paper evaluates various forms of fuzzy string matching between participants’ responses and target sentences, as automated metrics of listener transcript accuracy. We demonstrate that one particular metric, the Token Sort Ratio, is a consistent, highly efficient, and accurate metric for automated assessment of listener transcripts, as evidenced by high correlations with human-generated scores (best correlation: r = 0.940) and a strong relationship to acoustic markers of speech intelligibility. Thus, fuzzy string matching provides a practical tool for assessment of listener transcript accuracy in large-scale speech intelligibility studies. See https://tokensortratio.netlify.app for an online implementation.
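    For readers who want to apply this kind of scoring, the sketch below (not the paper's own code; see the URL above for the authors' implementation) illustrates the core idea of a token-sort-ratio-style metric using only the Python standard library: tokens are lowercased, sorted, and re-joined before a string-similarity ratio is computed, so word-order differences in listener transcripts are not penalized. Dedicated libraries compute the ratio from Levenshtein distance, which differs slightly from difflib's matcher.

        from difflib import SequenceMatcher

        def token_sort_ratio(response: str, target: str) -> float:
            """Return a 0-100 similarity score between two sentences, ignoring word order."""
            normalize = lambda s: " ".join(sorted(s.lower().split()))
            return 100 * SequenceMatcher(None, normalize(response), normalize(target)).ratio()

        # A transcript with transposed words scores 100; a misheard word lowers the score.
        print(token_sort_ratio("the beaker pick up blue", "pick up the blue beaker"))   # 100.0
        print(token_sort_ratio("pick up the blue speaker", "pick up the blue beaker"))  # < 100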
  • Bosker, H. R., Badaya, E., & Corley, M. (2021). Discourse markers activate their, like, cohort competitors. Discourse Processes, 58(9), 837-851. doi:10.1080/0163853X.2021.1924000.

    Abstract

    Speech in everyday conversations is riddled with discourse markers (DMs), such as well, you know, and like. However, in many lab-based studies of speech comprehension, such DMs are typically absent from the carefully articulated and highly controlled speech stimuli. As such, little is known about how these DMs influence online word recognition. The present study specifically investigated the online processing of DM like and how it influences the activation of words in the mental lexicon. We targeted the cohort competitor (CC) effect in the Visual World Paradigm: Upon hearing spoken instructions to “pick up the beaker,” human listeners also typically fixate—next to the target object—referents that overlap phonologically with the target word (cohort competitors such as beetle; CCs). However, several studies have argued that CC effects are constrained by syntactic, semantic, pragmatic, and discourse constraints. Therefore, the present study investigated whether DM like influences online word recognition by activating its cohort competitors (e.g., lightbulb). In an eye-tracking experiment using the Visual World Paradigm, we demonstrate that when participants heard spoken instructions such as “Now press the button for the, like … unicycle,” they showed anticipatory looks to the CC referent (lightbulb) well before hearing the target. This CC effect was sustained for a relatively long period of time, even after hearing disambiguating information (i.e., the /k/ in like). Analysis of the reaction times also showed that participants were significantly faster to select CC targets (lightbulb) when preceded by DM like. These findings suggest that seemingly trivial DMs, such as like, activate their CCs, impacting online word recognition. Thus, we advocate a more holistic perspective on spoken language comprehension in naturalistic communication, including the processing of DMs.
  • Bosker, H. R., & Peeters, D. (2021). Beat gestures influence which speech sounds you hear. Proceedings of the Royal Society B: Biological Sciences, 288: 20202419. doi:10.1098/rspb.2020.2419.

    Abstract

    Beat gestures—spontaneously produced biphasic movements of the hand—are among the most frequently encountered co-speech gestures in human communication. They are closely temporally aligned to the prosodic characteristics of the speech signal, typically occurring on lexically stressed syllables. Despite their prevalence across speakers of the world’s languages, how beat gestures impact spoken word recognition is unclear. Can these simple ‘flicks of the hand’ influence speech perception? Across a range of experiments, we demonstrate that beat gestures influence the explicit and implicit perception of lexical stress (e.g. distinguishing OBject from obJECT), and in turn can influence what vowels listeners hear. Thus, we provide converging evidence for a manual McGurk effect: relatively simple and widely occurring hand movements influence which speech sounds we hear.

    Additional information

    example stimuli and experimental data
  • Bosker, H. R. (2021). The contribution of amplitude modulations in speech to perceived charisma. In B. Weiss, J. Trouvain, M. Barkat-Defradas, & J. J. Ohala (Eds.), Voice attractiveness: Prosody, phonology and phonetics (pp. 165-181). Singapore: Springer. doi:10.1007/978-981-15-6627-1_10.

    Abstract

    Speech contains pronounced amplitude modulations in the 1–9 Hz range, correlating with the syllabic rate of speech. Recent models of speech perception propose that this rhythmic nature of speech is central to speech recognition and has beneficial effects on language processing. Here, we investigated the contribution of amplitude modulations to the subjective impression listeners have of public speakers. The speech from US presidential candidates Hillary Clinton and Donald Trump in the three TV debates of 2016 was acoustically analyzed by means of modulation spectra. These indicated that Clinton’s speech had more pronounced amplitude modulations than Trump’s speech, particularly in the 1–9 Hz range. A subsequent perception experiment, with listeners rating the perceived charisma of (low-pass filtered versions of) Clinton’s and Trump’s speech, showed that more pronounced amplitude modulations (i.e., more ‘rhythmic’ speech) increased perceived charisma ratings. These outcomes highlight the important contribution of speech rhythm to charisma perception.
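    As a rough illustration of the acoustic analysis described above (the chapter's own pipeline may differ in its envelope extraction and normalization details), a modulation spectrum can be approximated by taking the amplitude envelope of the waveform and examining how much of the envelope's spectral energy falls in the 1–9 Hz syllable-rate band. A minimal sketch, assuming a mono waveform `audio` sampled at `sr` Hz:

        import numpy as np
        from scipy.signal import hilbert

        def relative_modulation_energy(audio: np.ndarray, sr: int, lo: float = 1.0, hi: float = 9.0) -> float:
            envelope = np.abs(hilbert(audio))      # slow amplitude envelope of the speech signal
            envelope -= envelope.mean()            # remove DC so 0 Hz does not dominate
            spectrum = np.abs(np.fft.rfft(envelope))
            freqs = np.fft.rfftfreq(len(envelope), d=1.0 / sr)
            band = (freqs >= lo) & (freqs <= hi)   # syllable-rate modulations
            return float(spectrum[band].sum() / spectrum[1:].sum())

    Speech with more pronounced syllable-rate rhythm yields a larger share of envelope energy in this band.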
  • Brehm, L., & Meyer, A. S. (2021). Planning when to say: Dissociating cue use in utterance initiation using cross-validation. Journal of Experimental Psychology: General, 150(9), 1772-1799. doi:10.1037/xge0001012.

    Abstract

    In conversation, turns follow each other with minimal gaps. To achieve this, speakers must launch their utterances shortly before the predicted end of the partner’s turn. We examined the relative importance of cues to partner utterance content and partner utterance length for launching coordinated speech. In three experiments, Dutch adult participants had to produce prepared utterances (e.g., vier, “four”) immediately after a recording of a confederate’s utterance (zeven, “seven”). To assess the role of corepresenting content versus attending to speech cues in launching coordinated utterances, we varied whether the participant could see the stimulus being named by the confederate, the confederate prompt’s length, and whether within a block of trials, the confederate prompt’s length was predictable. We measured how these factors affected the gap between turns and the participants’ allocation of visual attention while preparing to speak. Using a machine-learning technique, model selection by k-fold cross-validation, we found that gaps were most strongly predicted by cues from the confederate speech signal, though some benefit was also conferred by seeing the confederate’s stimulus. This shows that, at least in a simple laboratory task, speakers rely more on cues in the partner’s speech than corepresentation of their utterance content.
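    To make the model-selection logic concrete, the hedged sketch below illustrates selection by k-fold cross-validation in general terms; it is not the authors' analysis code, and the predictor names are purely hypothetical stand-ins for cue sets such as partner speech cues versus visible stimulus content. Each candidate predictor set is scored by its held-out prediction error, and the set with the lowest error is preferred.

        import pandas as pd
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import KFold, cross_val_score

        def cv_error(data: pd.DataFrame, predictors: list, outcome: str = "turn_gap_ms") -> float:
            """Mean squared error of a linear model on held-out folds."""
            cv = KFold(n_splits=10, shuffle=True, random_state=1)
            scores = cross_val_score(LinearRegression(), data[predictors], data[outcome],
                                     scoring="neg_mean_squared_error", cv=cv)
            return -scores.mean()

        # Hypothetical cue sets; the set whose cv_error is lowest would be selected.
        candidate_models = {
            "speech_cues": ["prompt_duration", "prompt_offset_cue"],
            "content":     ["stimulus_visible"],
            "both":        ["prompt_duration", "prompt_offset_cue", "stimulus_visible"],
        }
        # best = min(candidate_models, key=lambda name: cv_error(trial_data, candidate_models[name]))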
  • Brehm, L., Jackson, C. N., & Miller, K. L. (2021). Probabilistic online processing of sentence anomalies. Language, Cognition and Neuroscience, 36(8), 959-983. doi:10.1080/23273798.2021.1900579.

    Abstract

    Listeners can successfully interpret the intended meaning of an utterance even when it contains errors or other unexpected anomalies. The present work combines an online measure of attention to sentence referents (visual world eye-tracking) with offline judgments of sentence meaning to disclose how the interpretation of anomalous sentences unfolds over time in order to explore mechanisms of non-literal processing. We use a metalinguistic judgment in Experiment 1 and an elicited imitation task in Experiment 2. In both experiments, we focus on one morphosyntactic anomaly (Subject-verb agreement; The key to the cabinets literally *were … ) and one semantic anomaly (Without; Lulu went to the gym without her hat ?off) and show that non-literal referents to each are considered upon hearing the anomalous region of the sentence. This shows that listeners understand anomalies by overwriting or adding to an initial interpretation and that this occurs incrementally and adaptively as the sentence unfolds.
  • Creemers, A., & Embick, D. (2021). Retrieving stem meanings in opaque words during auditory lexical processing. Language, Cognition and Neuroscience, 36(9), 1107-1122. doi:10.1080/23273798.2021.1909085.

    Abstract

    Recent constituent priming experiments show that Dutch and German prefixed verbs prime their stem, regardless of semantic transparency (e.g. Smolka et al. [(2014). ‘Verstehen’ (‘understand’) primes ‘stehen’ (‘stand’): Morphological structure overrides semantic compositionality in the lexical representation of German complex verbs. Journal of Memory and Language, 72, 16–36. https://doi.org/10.1016/j.jml.2013.12.002]). We examine whether the processing of opaque verbs (e.g. herhalen “repeat”) involves the retrieval of only the whole-word meaning, or whether the lexical-semantic meaning of the stem (halen as “take/get”) is retrieved as well. We report the results of an auditory semantic priming experiment with Dutch prefixed verbs, testing whether the recognition of a semantic associate to the stem (BRENGEN “bring”) is facilitated by the presentation of an opaque prefixed verb. In contrast to prior visual studies, significant facilitation after semantically opaque primes is found, which suggests that the lexical-semantic meaning of stems in opaque words is retrieved. We examine the implications that these findings have for auditory word recognition, and for the way in which different types of meanings are represented and processed.

    Additional information

    supplemental material
  • Decuyper, C., Brysbaert, M., Brodeur, M. B., & Meyer, A. S. (2021). Bank of Standardized Stimuli (BOSS): Dutch names for 1400 photographs. Journal of Cognition, 4(1): 33. doi:10.5334/joc.180.

    Abstract

    We present written naming norms from 153 young adult Dutch speakers for 1397 photographs (the BOSS set; see Brodeur, Dionne-Dostie, Montreuil, & Lepage, 2010; Brodeur, Guérard, & Bouras, 2014). From the norming study, we report the preferred (modal) name, alternative names, name agreement, and average object agreement. In addition, the database includes Zipf frequency, word prevalence, and Age of Acquisition for the modal picture names collected. Furthermore, we describe a subset of 359 photographs with very good name agreement and a subset of 35 photos with two common names. These sets may be particularly valuable for designing experiments. Though the participants typed the object names, comparisons with other datasets indicate that the collected norms are valuable for spoken naming studies as well.
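    For reference, the Zipf frequency reported in the norms is a standard log-scaled frequency measure: the base-10 logarithm of a word's frequency per billion tokens (equivalently, log10 of frequency per million plus 3). A minimal sketch of the unsmoothed version (the database may use a smoothed variant):

        import math

        def zipf(count: int, corpus_tokens: int) -> float:
            """Zipf value = log10(occurrences per billion words)."""
            per_million = count / (corpus_tokens / 1_000_000)
            return math.log10(per_million) + 3.0

        print(zipf(1_000, 50_000_000))  # 20 occurrences per million words -> Zipf ~4.3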
  • Eviatar, Z., & Huettig, F. (Eds.). (2021). Literacy and writing systems [Special Issue]. Journal of Cultural Cognitive Science.
  • Eviatar, Z., & Huettig, F. (2021). The literate mind. Journal of Cultural Cognitive Science, 5, 81-84. doi:10.1007/s41809-021-00086-5.
  • Favier, S., & Huettig, F. (2021). Are there core and peripheral syntactic structures? Experimental evidence from Dutch native speakers with varying literacy levels. Lingua, 251: 102991. doi:10.1016/j.lingua.2020.102991.

    Abstract

    Some theorists posit the existence of a ‘core’ grammar that virtually all native speakers acquire, and a ‘peripheral’ grammar that many do not. We investigated the viability of such a categorical distinction in the Dutch language. We first consulted linguists’ intuitions as to the ‘core’ or ‘peripheral’ status of a wide range of grammatical structures. We then tested a selection of core- and peripheral-rated structures on naïve participants with varying levels of literacy experience, using grammaticality judgment as a proxy for receptive knowledge. Overall, participants demonstrated better knowledge of ‘core’ structures than ‘peripheral’ structures, but the considerable variability within these categories was strongly suggestive of a continuum rather than a categorical distinction between them. We also hypothesised that individual differences in the knowledge of core and peripheral structures would reflect participants’ literacy experience. This was supported only by a small trend in our data. The results fit best with the notion that more frequent syntactic structures are mastered by more people than infrequent ones and challenge the received sense of a categorical core-periphery distinction.
  • Favier, S., Meyer, A. S., & Huettig, F. (2021). Literacy can enhance syntactic prediction in spoken language processing. Journal of Experimental Psychology: General, 150(10), 2167-2174. doi:10.1037/xge0001042.

    Abstract

    Language comprehenders can use syntactic cues to generate predictions online about upcoming language. Previous research with reading-impaired adults and healthy, low-proficiency adult and child learners suggests that reading skills are related to prediction in spoken language comprehension. Here we investigated whether differences in literacy are also related to predictive spoken language processing in non-reading-impaired proficient adult readers with varying levels of literacy experience. Using the visual world paradigm enabled us to measure prediction based on syntactic cues in the spoken sentence, prior to the (predicted) target word. Literacy experience was found to be the strongest predictor of target anticipation, independent of general cognitive abilities. These findings suggest that a) experience with written language can enhance syntactic prediction of spoken language in normal adult language users, and b) processing skills can be transferred to related tasks (from reading to listening) if the domains involve similar processes (e.g., predictive dependencies) and representations (e.g., syntactic).

    Additional information

    Online supplementary material
  • Favier, S., & Huettig, F. (2021). Long-term written language experience affects grammaticality judgments and usage but not priming of spoken sentences. Quarterly Journal of Experimental Psychology, 74(8), 1378-1395. doi:10.1177/17470218211005228.

    Abstract

    ‘Book language’ offers a richer linguistic experience than typical conversational speech in terms of its syntactic properties. Here, we investigated the role of long-term syntactic experience on syntactic knowledge and processing. In a pre-registered study with 161 adult native Dutch speakers with varying levels of literacy, we assessed the contribution of individual differences in written language experience to offline and online syntactic processes. Offline syntactic knowledge was assessed as accuracy in an auditory grammaticality judgment task in which we tested violations of four Dutch grammatical norms. Online syntactic processing was indexed by syntactic priming of the Dutch dative alternation, using a comprehension-to-production priming paradigm with auditory presentation. Controlling for the contribution of non-verbal IQ, verbal working memory, and processing speed, we observed a robust effect of literacy experience on the detection of grammatical norm violations in spoken sentences, suggesting that exposure to the syntactic complexity and diversity of written language has specific benefits for general (modality-independent) syntactic knowledge. We replicated previous results by finding robust comprehension-to-production structural priming, both with and without lexical overlap between prime and target. Although literacy experience affected the usage of syntactic alternates in our large sample, it did not modulate their priming. We conclude that amount of experience with written language increases explicit awareness of grammatical norm violations and changes the usage of (PO vs. DO) dative spoken sentences but has no detectable effect on their implicit syntactic priming in proficient language users. These findings constrain theories about the effect of long-term experience on syntactic processing.
  • Fernandes, T., Arunkumar, M., & Huettig, F. (2021). The role of the written script in shaping mirror-image discrimination: Evidence from illiterate, Tamil literate, and Tamil-Latin-alphabet bi-literate adults. Cognition, 206: 104493. doi:10.1016/j.cognition.2020.104493.

    Abstract

    Learning a script with mirrored graphs (e.g., d ≠ b) requires overcoming the evolutionarily old perceptual tendency to process mirror images as equivalent. Thus, breaking mirror invariance offers an important tool for understanding cultural re-shaping of evolutionarily ancient cognitive mechanisms. Here we investigated the role of script (i.e., presence vs. absence of mirrored graphs: Latin alphabet vs. Tamil) by revisiting mirror-image processing by illiterate, Tamil monoliterate, and Tamil-Latin-alphabet bi-literate adults. Participants performed two same-different tasks (one orientation-based, another shape-based) on Latin-alphabet letters. Tamil monoliterates were significantly better than illiterates and showed good explicit mirror-image discrimination. However, only bi-literate adults fully broke mirror invariance: they showed slower shape-based judgments for mirrored than identical pairs and a reduced disadvantage in orientation-based over shape-based judgments of mirrored pairs. These findings suggest that learning a script with mirrored graphs is the strongest force for breaking mirror invariance.

    Additional information

    supplementary material
  • Fisher, N., Hadley, L., Corps, R. E., & Pickering, M. (2021). The effects of dual-task interference in predicting turn-ends in speech and music. Brain Research, 1768: 147571. doi:10.1016/j.brainres.2021.147571.

    Abstract

    Determining when a partner’s spoken or musical turn will end requires well-honed predictive abilities. Evidence suggests that our motor systems are activated during perception of both speech and music, and it has been argued that motor simulation is used to predict turn-ends across domains. Here we used a dual-task interference paradigm to investigate whether motor simulation of our partner’s action underlies our ability to make accurate turn-end predictions in speech and in music. Furthermore, we explored how specific this simulation is to the action being predicted. We conducted two experiments, one investigating speech turn-ends, and one investigating music turn-ends. In each, 34 proficient pianists predicted turn-endings while (1) passively listening, (2) producing an effector-specific motor activity (mouth/hand movement), or (3) producing a task- and effector-specific motor activity (mouthing words/fingering a piano melody). In the speech experiment, any movement during speech perception disrupted predictions of spoken turn-ends, whether the movement was task-specific or not. In the music experiment, only task-specific movement (i.e., fingering a piano melody) disrupted predictions of musical turn-ends. These findings support the use of motor simulation to make turn-end predictions in both speech and music but suggest that the specificity of this simulation may differ between domains.
  • Healthy Brain Study Consortium, Aarts, E., Akkerman, A., Altgassen, M., Bartels, R., Beckers, D., Bevelander, K., Bijleveld, E., Blaney Davidson, E., Boleij, A., Bralten, J., Cillessen, T., Claassen, J., Cools, R., Cornelissen, I., Dresler, M., Eijsvogels, T., Faber, M., Fernández, G., Figner, B., Fritsche, M., Füllbrunn, S., Gayet, S., Van Gelder, M. M. H. J., Van Gerven, M., Geurts, S., Greven, C. U., Groefsema, M., Haak, K., Hagoort, P., Hartman, Y., Van der Heijden, B., Hermans, E., Heuvelmans, V., Hintz, F., Den Hollander, J., Hulsman, A. M., Idesis, S., Jaeger, M., Janse, E., Janzing, J., Kessels, R. P. C., Karremans, J. C., De Kleijn, W., Klein, M., Klumpers, F., Kohn, N., Korzilius, H., Krahmer, B., De Lange, F., Van Leeuwen, J., Liu, H., Luijten, M., Manders, P., Manevska, K., Marques, J. P., Matthews, J., McQueen, J. M., Medendorp, P., Melis, R., Meyer, A. S., Oosterman, J., Overbeek, L., Peelen, M., Popma, J., Postma, G., Roelofs, K., Van Rossenberg, Y. G. T., Schaap, G., Scheepers, P., Selen, L., Starren, M., Swinkels, D. W., Tendolkar, I., Thijssen, D., Timmerman, H., Tutunji, R., Tuladhar, A., Veling, H., Verhagen, M., Verkroost, J., Vink, J., Vriezekolk, V., Vrijsen, J., Vyrastekova, J., Van der Wal, S., Willems, R. M., & Willemsen, A. (2021). Protocol of the Healthy Brain Study: An accessible resource for understanding the human brain and how it dynamically and individually operates in its bio-social context. PLoS One, 16(12): e0260952. doi:10.1371/journal.pone.0260952.

    Abstract

    The endeavor to understand the human brain has seen more progress in the last few decades than in the previous two millennia. Still, our understanding of how the human brain relates to behavior in the real world and how this link is modulated by biological, social, and environmental factors is limited. To address this, we designed the Healthy Brain Study (HBS), an interdisciplinary, longitudinal, cohort study based on multidimensional, dynamic assessments in both the laboratory and the real world. Here, we describe the rationale and design of the currently ongoing HBS. The HBS is examining a population-based sample of 1,000 healthy participants (age 30-39) who are thoroughly studied across an entire year. Data are collected through cognitive, affective, behavioral, and physiological testing, neuroimaging, bio-sampling, questionnaires, ecological momentary assessment, and real-world assessments using wearable devices. These data will become an accessible resource for the scientific community enabling the next step in understanding the human brain and how it dynamically and individually operates in its bio-social context. An access procedure to the collected data and bio-samples is in place and published on https://www.healthybrainstudy.nl/en/data-and-methods.

    https://www.trialregister.nl/trial/7955

    Additional information

    supplementary material
  • Hintz, F., Voeten, C. C., McQueen, J. M., & Scharenborg, O. (2021). The effects of onset and offset masking on the time course of non-native spoken-word recognition in noise. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 133-139). Vienna: Cognitive Science Society.

    Abstract

    Using the visual world paradigm, the present study investigated the effects of word onset and offset masking on the time course of non-native spoken-word recognition in the presence of background noise. In two experiments, Dutch non-native listeners heard English target words, preceded by carrier sentences that were noise-free (Experiment 1) or contained intermittent noise (Experiment 2). Target words were either onset-masked, offset-masked, or not masked at all. Results showed that onset masking delayed target word recognition more than offset masking did, suggesting that – similar to native listeners – non-native listeners strongly rely on word onset information during word recognition in noise.

    Additional information

    Link to Preprint on BioRxiv
  • Holler, J., Alday, P. M., Decuyper, C., Geiger, M., Kendrick, K. H., & Meyer, A. S. (2021). Competition reduces response times in multiparty conversation. Frontiers in Psychology, 12: 693124. doi:10.3389/fpsyg.2021.693124.

    Abstract

    Natural conversations are characterized by short transition times between turns. This holds in particular for multi-party conversations. The short turn transitions in everyday conversations contrast sharply with the much longer speech onset latencies observed in laboratory studies where speakers respond to spoken utterances. There are many factors that facilitate speech production in conversational compared to laboratory settings. Here we highlight one of them, the impact of competition for turns. In multi-party conversations, speakers often compete for turns. In quantitative corpus analyses of multi-party conversation, the fastest response determines the recorded turn transition time. In contrast, in dyadic conversations such competition for turns is much less likely to arise, and in laboratory experiments with individual participants it does not arise at all. Therefore, all responses tend to be recorded. Thus, competition for turns may reduce the recorded mean turn transition times in multi-party conversations for a simple statistical reason: slow responses are not included in the means. We report two studies illustrating this point. We first report the results of simulations showing how much the response times in a laboratory experiment would be reduced if, for each trial, instead of recording all responses, only the fastest responses of several participants responding independently on the trial were recorded. We then present results from a quantitative corpus analysis comparing turn transition times in dyadic and triadic conversations. There was no significant group size effect in question-response transition times, where the present speaker often selects the next one, thus reducing competition between speakers. But, as predicted, triads showed shorter turn transition times than dyads for the remaining turn transitions, where competition for the floor was more likely to arise. Together, these data show that turn transition times in conversation should be interpreted in the context of group size, turn transition type, and social setting.
  • Hustá, C., Zheng, X., Papoutsi, C., & Piai, V. (2021). Electrophysiological signatures of conceptual and lexical retrieval from semantic memory. Neuropsychologia, 161: 107988. doi:10.1016/j.neuropsychologia.2021.107988.

    Abstract

    Retrieval from semantic memory of conceptual and lexical information is essential for producing speech. It is unclear whether there are differences in the neural mechanisms of conceptual and lexical retrieval when spreading activation through semantic memory is initiated by verbal or nonverbal settings. The same twenty participants took part in two EEG experiments. The first experiment examined conceptual and lexical retrieval following nonverbal settings, whereas the second experiment was a replication of previous studies examining conceptual and lexical retrieval following verbal settings. Target pictures were presented after constraining and nonconstraining contexts. In the nonverbal settings, contexts were provided as two priming pictures (e.g., constraining: nest, feather; nonconstraining: anchor, lipstick; target picture: BIRD). In the verbal settings, contexts were provided as sentences (e.g., constraining: “The farmer milked a...”; nonconstraining: “The child drew a...”; target picture: COW). Target pictures were named faster following constraining contexts in both experiments, indicating that conceptual preparation starts before target picture onset in constraining conditions. In the verbal experiment, we replicated the alpha-beta power decreases in constraining relative to nonconstraining conditions before target picture onset. No such power decreases were found in the nonverbal experiment. Power decreases in constraining relative to nonconstraining conditions were significantly different between experiments. Our findings suggest that participants engage in conceptual preparation following verbal and nonverbal settings, albeit differently. The retrieval of a target word, initiated by verbal settings, is associated with alpha-beta power decreases. By contrast, broad conceptual preparation alone, prompted by nonverbal settings, does not seem enough to elicit alpha-beta power decreases. These findings have implications for theories of oscillations and semantic memory.

    Additional information

    supplementary material
  • Janse, E., & Andringa, S. J. (2021). The roles of cognitive abilities and hearing acuity in older adults’ recognition of words taken from fast and spectrally reduced speech. Applied Psycholinguistics, 42(3), 763-790. doi:10.1017/S0142716421000047.

    Abstract

    Previous literature has identified several cognitive abilities as predictors of individual differences in speech perception. Working memory was chief among them, but effects have also been found for processing speed. Most research has been conducted on speech in noise, but fast and unclear articulation also makes listening challenging, particularly for older listeners. As a first step toward specifying the cognitive mechanisms underlying spoken word recognition, we set up this study to determine which factors explain unique variation in word identification accuracy in fast speech, and the extent to which this was affected by further degradation of the speech signal. To that end, 105 older adults were tested on identification accuracy of fast words in unaltered and degraded conditions in which the speech stimuli were low-pass filtered. They were also tested on processing speed, memory, vocabulary knowledge, and hearing sensitivity. A structural equation analysis showed that only memory and hearing sensitivity explained unique variance in word recognition in both listening conditions. Working memory was more strongly associated with performance in the unfiltered than in the filtered condition. These results suggest that memory skills, rather than speed, facilitate the mapping of single words onto stored lexical representations, particularly in conditions of medium difficulty.
  • Jongman, S. R., Khoe, Y. H., & Hintz, F. (2021). Vocabulary size influences spontaneous speech in native language users: Validating the use of automatic speech recognition in individual differences research. Language and Speech, 64(1), 35-51. doi:10.1177/0023830920911079.

    Abstract

    Previous research has shown that vocabulary size affects performance on laboratory word production tasks. Individuals who know many words show faster lexical access and retrieve more words belonging to pre-specified categories than individuals who know fewer words. The present study examined the relationship between receptive vocabulary size and speaking skills as assessed in a natural sentence production task. We asked whether measures derived from spontaneous responses to everyday questions correlate with the size of participants’ vocabulary. Moreover, we assessed the suitability of automatic speech recognition for the analysis of participants’ responses in complex language production data. We found that vocabulary size predicted indices of spontaneous speech: Individuals with a larger vocabulary produced more words and had a higher speech-silence ratio compared to individuals with a smaller vocabulary. Importantly, these relationships were reliably identified using manual and automated transcription methods. Taken together, our results suggest that spontaneous speech elicitation is a useful method to investigate natural language production and that automatic speech recognition can alleviate the burden of labor-intensive speech transcription.
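    As an illustration of how such indices can be derived from automatic transcripts, the sketch below computes a word count and a speech-silence ratio from word-level timestamps of the kind many ASR systems return. The field names and the exact definition of the ratio are illustrative assumptions, not the measures used in the paper.

        def speech_silence_ratio(words, response_duration):
            """Total speaking time divided by total pause time within a response."""
            speech_time = sum(w["end"] - w["start"] for w in words)
            silence_time = max(response_duration - speech_time, 1e-9)  # avoid division by zero
            return speech_time / silence_time

        # Three hypothetical word timestamps (in seconds) from an ASR transcript of one response.
        words = [{"start": 0.20, "end": 0.55}, {"start": 0.80, "end": 1.30}, {"start": 1.45, "end": 2.10}]
        print(len(words), round(speech_silence_ratio(words, response_duration=2.5), 2))  # 3 words, ratio 1.5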
  • Kapteijns, B., & Hintz, F. (2021). Comparing predictors of sentence self-paced reading times: Syntactic complexity versus transitional probability metrics. PLoS One, 16(7): e0254546. doi:10.1371/journal.pone.0254546.

    Abstract

    When estimating the influence of sentence complexity on reading, researchers typically opt for one of two main approaches: Measuring syntactic complexity (SC) or transitional probability (TP). Comparisons of the predictive power of both approaches have yielded mixed results. To address this inconsistency, we conducted a self-paced reading experiment. Participants read sentences of varying syntactic complexity. From two alternatives, we selected the set of SC and TP measures, respectively, that provided the best fit to the self-paced reading data. We then compared the contributions of the SC and TP measures to reading times when entered into the same model. Our results showed that both measures explained significant portions of variance in self-paced reading times. Thus, researchers aiming to measure sentence complexity should take both SC and TP into account. All of the analyses were conducted with and without control variables known to influence reading times (word/sentence length, word frequency and word position) to showcase how the effects of SC and TP change in the presence of the control variables.

    Additional information

    supporting information
  • Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2021). Prediction in bilingual children: The missing piece of the puzzle. In E. Kaan, & T. Grüter (Eds.), Prediction in Second Language Processing and Learning (pp. 116-137). Amsterdam: Benjamins.

    Abstract

    A wealth of studies has shown that more proficient monolingual speakers are better at predicting upcoming information during language comprehension. Similarly, prediction skills of adult second language (L2) speakers in their L2 have also been argued to be modulated by their L2 proficiency. How exactly language proficiency and prediction are linked, however, is yet to be systematically investigated. One group of language users which has the potential to provide invaluable insights into this link is bilingual children. In this paper, we compare bilingual children’s prediction skills with those of monolingual children and adult L2 speakers, and show how investigating bilingual children’s prediction skills may contribute to our understanding of how predictive processing works.
  • Kaufeld, G. (2021). Investigating spoken language comprehension as perceptual inference. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • He, J., Meyer, A. S., Creemers, A., & Brehm, L. (2021). Conducting language production research online: A web-based study of semantic context and name agreement effects in multi-word production. Collabra: Psychology, 7(1): 29935. doi:10.1525/collabra.29935.

    Abstract

    Few web-based experiments have explored spoken language production, perhaps due to concerns of data quality, especially for measuring onset latencies. The present study highlights how speech production research can be done outside of the laboratory by measuring utterance durations and speech fluency in a multiple-object naming task when examining two effects related to lexical selection: semantic context and name agreement. A web-based modified blocked-cyclic naming paradigm was created, in which participants named a total of sixteen simultaneously presented pictures on each trial. The pictures were either four tokens from the same semantic category (homogeneous context), or four tokens from different semantic categories (heterogeneous context). Name agreement of the pictures was varied orthogonally (high, low). In addition to onset latency, five dependent variables were measured to index naming performance: accuracy, utterance duration, total pause time, the number of chunks (word groups pronounced without intervening pauses), and first chunk length. Bayesian analyses showed effects of semantic context and name agreement for some of the dependent measures, but no interaction. We discuss the methodological implications of the current study and make best practice recommendations for spoken language production research in an online environment.
  • He, J., Meyer, A. S., & Brehm, L. (2021). Concurrent listening affects speech planning and fluency: The roles of representational similarity and capacity limitation. Language, Cognition and Neuroscience, 36(10), 1258-1280. doi:10.1080/23273798.2021.1925130.

    Abstract

    In a novel continuous speaking-listening paradigm, we explored how speech planning was affected by concurrent listening. In Experiment 1, Dutch speakers named pictures with high versus low name agreement while ignoring Dutch speech, Chinese speech, or eight-talker babble. Both name agreement and type of auditory input influenced response timing and chunking, suggesting that representational similarity impacts lexical selection and the scope of advance planning in utterance generation. In Experiment 2, Dutch speakers named pictures with high or low name agreement while either ignoring Dutch words, or attending to them for a later memory test. Both name agreement and attention demand influenced response timing and chunking, suggesting that attention demand impacts lexical selection and the planned utterance units in each response. The study indicates that representational similarity and attention demand play important roles in linguistic dual-task interference, and the interference can be managed by adapting when and how to plan speech.

    Additional information

    supplemental material
  • Onnis, L., & Huettig, F. (2021). Can prediction and retrodiction explain whether frequent multi-word phrases are accessed ’precompiled’ from memory or compositionally constructed on the fly? Brain Research, 1772: 147674. doi:10.1016/j.brainres.2021.147674.

    Abstract

    An important debate on the architecture of the language faculty has been the extent to which it relies on a compositional system that constructs larger units from morphemes to words to phrases to utterances on the fly and in real time using grammatical rules; or a system that chunks large preassembled, stored units of language from memory; or some combination of both approaches. Good empirical evidence exists for both ’computed’ and ’large stored’ forms in language, but little is known about what shapes multi-word storage / access or compositional processing. Here we explored whether predictive and retrodictive processes are a likely determinant of multi-word storage / processing. Our results suggest that forward and backward predictability are independently informative in determining the lexical cohesiveness of multi-word phrases. In addition, our results call for a reevaluation of the role of retrodiction in contemporary language processing accounts (cf. Ferreira and Chantavarin 2018).
  • Ota, M., San Jose, A., & Smith, K. (2021). The emergence of word-internal repetition through iterated learning: Explaining the mismatch between learning biases and language design. Cognition, 210: 104585. doi:10.1016/j.cognition.2021.104585.

    Abstract

    The idea that natural language is shaped by biases in learning plays a key role in our understanding of how human language is structured, but its corollary that there should be a correspondence between typological generalisations and ease of acquisition is not always supported. For example, natural languages tend to avoid close repetitions of consonants within a word, but developmental evidence suggests that, if anything, words containing sound repetitions are more, not less, likely to be acquired than those without. In this study, we use word-internal repetition as a test case to provide a cultural evolutionary explanation of when and how learning biases impact on language design. Two artificial language experiments showed that adult speakers possess a bias for both consonant and vowel repetitions when learning novel words, but the effects of this bias were observable in language transmission only when there was a relatively high learning pressure on the lexicon. Based on these results, we argue that whether the design of a language reflects biases in learning depends on the relative strength of pressures from learnability and communication efficiency exerted on the linguistic system during cultural transmission.

    Additional information

    supplementary data
  • Hu, Y., Lv, Q., Pascual, E., Liang, J., & Huettig, F. (2021). Syntactic priming in illiterate and literate older Chinese adults. Journal of Cultural Cognitive Science, 5, 267-286. doi:10.1007/s41809-021-00082-9.

    Abstract

    Does life-long literacy experience modulate syntactic priming in spoken language processing? Such a postulated influence is compatible with usage-based theories of language processing that propose that all linguistic skills are a function of accumulated experience with language across life. Here we investigated the effect of literacy experience on syntactic priming in Mandarin in sixty Chinese older adults from Hebei province. Thirty participants were completely illiterate and thirty were literate Mandarin speakers of similar age and socioeconomic background. We first observed usage differences: literates produced robustly more prepositional object (PO) constructions than illiterates. This replicates, with a different sample, language, and cultural background, previous findings that literacy experience affects (baseline) usage of PO and DO transitive alternates. We also observed robust syntactic priming for double-object (DO), but not prepositional-object (PO) dative alternations for both groups. The magnitude of this DO priming however was higher in literates than in illiterates. We also observed that cumulative adaptation in syntactic priming differed as a function of literacy. Cumulative syntactic priming in literates appears to be related mostly to comprehending others, whereas in illiterates it is also associated with repeating self-productions. Further research is needed to confirm this interpretation.
  • Raviv, L., De Heer Kloots, M., & Meyer, A. S. (2021). What makes a language easy to learn? A preregistered study on how systematic structure and community size affect language learnability. Cognition, 210: 104620. doi:10.1016/j.cognition.2021.104620.

    Abstract

    Cross-linguistic differences in morphological complexity could have important consequences for language learning. Specifically, it is often assumed that languages with more regular, compositional, and transparent grammars are easier to learn by both children and adults. Moreover, it has been shown that such grammars are more likely to evolve in bigger communities. Together, this suggests that some languages are acquired faster than others, and that this advantage can be traced back to community size and to the degree of systematicity in the language. However, the causal relationship between systematic linguistic structure and language learnability has not been formally tested, despite its potential importance for theories on language evolution, second language learning, and the origin of linguistic diversity. In this pre-registered study, we experimentally tested the effects of community size and systematic structure on adult language learning. We compared the acquisition of different yet comparable artificial languages that were created by big or small groups in a previous communication experiment, which varied in their degree of systematic linguistic structure. We asked (a) whether more structured languages were easier to learn; and (b) whether languages created by the bigger groups were easier to learn. We found that highly systematic languages were learned faster and more accurately by adults, but that the relationship between language learnability and linguistic structure was typically non-linear: high systematicity was advantageous for learning, but learners did not benefit from partly or semi-structured languages. Community size did not affect learnability: languages that evolved in big and small groups were equally learnable, and there was no additional advantage for languages created by bigger groups beyond their degree of systematic structure. Furthermore, our results suggested that predictability is an important advantage of systematic structure: participants who learned more structured languages were better at generalizing these languages to new, unfamiliar meanings, and different participants who learned the same more structured languages were more likely to produce similar labels. That is, systematic structure may allow speakers to converge effortlessly, such that strangers can immediately understand each other.
  • Reifegerste, J., Meyer, A. S., Zwitserlood, P., & Ullman, M. T. (2021). Aging affects steaks more than knives: Evidence that the processing of words related to motor skills is relatively spared in aging. Brain and Language, 218: 104941. doi:10.1016/j.bandl.2021.104941.

    Abstract

    Lexical-processing declines are a hallmark of aging. However, the extent of these declines may vary as a function of different factors. Motivated by findings from neurodegenerative diseases and healthy aging, we tested whether ‘motor-relatedness’ (the degree to which words are associated with particular human body movements) might moderate such declines. We investigated this question by examining data from three experiments. The experiments were carried out in different languages (Dutch, German, English) using different tasks (lexical decision, picture naming), and probed verbs and nouns, in all cases controlling for potentially confounding variables (e.g., frequency, age-of-acquisition, imageability). Whereas ‘non-motor words’ (e.g., steak) showed age-related performance decreases in all three experiments, ‘motor words’ (e.g., knife) yielded either smaller decreases (in one experiment) or no decreases (in two experiments). The findings suggest that motor-relatedness can attenuate or even prevent age-related lexical declines, perhaps due to the relative sparing of neural circuitry underlying such words.

    Additional information

    supplementary material
  • Rodd, J., Decuyper, C., Bosker, H. R., & Ten Bosch, L. (2021). A tool for efficient and accurate segmentation of speech data: Announcing POnSS. Behavior Research Methods, 53, 744-756. doi:10.3758/s13428-020-01449-6.

    Abstract

    Despite advances in automatic speech recognition (ASR), human input is still essential to produce research-grade segmentations of speech data. Conventional approaches to manual segmentation are very labour-intensive. We introduce POnSS, a browser-based system that is specialized for the task of segmenting the onsets and offsets of words, that combines aspects of ASR with limited human input. In developing POnSS, we identified several subtasks of segmentation, and implemented each of these as separate interfaces for the annotators to interact with, to streamline their task as much as possible. We evaluated segmentations made with POnSS against a baseline of segmentations of the same data made conventionally in Praat. We observed that POnSS achieved comparable reliability to segmentation using Praat, but required 23% less annotator time investment. Because of its greater efficiency without sacrificing reliability, POnSS represents a distinct methodological advance for the segmentation of speech data.
  • San Jose, A., Roelofs, A., & Meyer, A. S. (2021). Modeling the distributional dynamics of attention and semantic interference in word production. Cognition, 211: 104636. doi:10.1016/j.cognition.2021.104636.

    Abstract

    In recent years, it has become clear that attention plays an important role in spoken word production. Some of this evidence comes from distributional analyses of reaction time (RT) in regular picture naming and picture-word interference. Yet we lack a mechanistic account of how the properties of RT distributions come to reflect attentional processes and how these processes may in turn modulate the amount of conflict between lexical representations. Here, we present a computational account according to which attentional lapses allow for existing conflict to build up unsupervised on a subset of trials, thus modulating the shape of the resulting RT distribution. Our process model resolves discrepancies between outcomes of previous studies on semantic interference. Moreover, the model's predictions were confirmed in a new experiment where participants' motivation to remain attentive determined the size and distributional locus of semantic interference in picture naming. We conclude that process modeling of RT distributions importantly improves our understanding of the interplay between attention and conflict in word production. Our model thus provides a framework for interpreting distributional analyses of RT data in picture naming tasks.
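    One common way to carry out the distributional RT analyses referred to above is to summarize each naming-time distribution with an ex-Gaussian fit, whose tau parameter indexes the slow tail where lapse-related conflict effects would surface; whether this particular parameterization matches the authors' analyses is an assumption. A minimal sketch using scipy's exponentially modified normal distribution (mu = loc, sigma = scale, tau = K * scale):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Simulated picture-naming RTs (ms): Gaussian component plus an exponential slow tail.
        rts = 600 + 80 * rng.standard_normal(500) + rng.exponential(150, 500)

        K, loc, scale = stats.exponnorm.fit(rts)
        mu, sigma, tau = loc, scale, K * scale
        print(f"mu = {mu:.0f} ms, sigma = {sigma:.0f} ms, tau = {tau:.0f} ms")  # tau captures the slow tail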
  • Severijnen, G. G. A., Bosker, H. R., Piai, V., & McQueen, J. M. (2021). Listeners track talker-specific prosody to deal with talker-variability. Brain Research, 1769: 147605. doi:10.1016/j.brainres.2021.147605.

    Abstract

    One of the challenges in speech perception is that listeners must deal with considerable segmental and suprasegmental variability in the acoustic signal due to differences between talkers. Most previous studies have focused on how listeners deal with segmental variability. In this EEG experiment, we investigated whether listeners track talker-specific usage of suprasegmental cues to lexical stress to recognize spoken words correctly. In a three-day training phase, Dutch participants learned to map non-word minimal stress pairs onto different object referents (e.g., USklot meant “lamp”; usKLOT meant “train”). These non-words were produced by two male talkers. Critically, each talker used only one suprasegmental cue to signal stress (e.g., Talker A used only F0 and Talker B only intensity). We expected participants to learn which talker used which cue to signal stress. In the test phase, participants indicated whether spoken sentences including these non-words were correct (“The word for lamp is…”). We found that participants were slower to indicate that a stimulus was correct if the non-word was produced with the unexpected cue (e.g., Talker A using intensity). That is, if in training Talker A used F0 to signal stress, participants experienced a mismatch between predicted and perceived phonological word-forms if, at test, Talker A unexpectedly used intensity to cue stress. In contrast, the N200 amplitude, an event-related potential related to phonological prediction, was not modulated by the cue mismatch. Theoretical implications of these contrasting results are discussed. The behavioral findings illustrate talker-specific prediction of prosodic cues, picked up through perceptual learning during training.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2021). The effect of orthographic systems on the developing reading system: Typological and computational analyses. Psychological Review, 128(1), 125-159. doi:10.1037/rev0000257.

    Abstract

    Orthographic systems vary dramatically in the extent to which they encode a language’s phonological and lexico-semantic structure. Studies of the effects of orthographic transparency suggest that such variation is likely to have major implications for how the reading system operates. However, such studies have been unable to examine in isolation the contributory effect of transparency on reading due to co-varying linguistic or socio-cultural factors. We first investigated the phonological properties of languages using the range of the world’s orthographic systems (alphabetic; alphasyllabic; consonantal; syllabic; logographic), and found that, once geographical proximity is taken into account, phonological properties do not relate to orthographic system. We then explored the processing implications of orthographic variation by training a connectionist implementation of the triangle model of reading on the range of orthographic systems whilst controlling for phonological and semantic structure. We show that the triangle model is effective as a universal model of reading, able to replicate key behavioural and neuroscientific results. Importantly, the model also generates new predictions deriving from an explicit description of the effects of orthographic transparency on how reading is realised and defines the consequences of orthographic systems on reading processes.
  • Speed, L., Chen, J., Huettig, F., & Majid, A. (2021). Classifier categories reflect, but do not affect conceptual organization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 47(4), 625-640. doi:10.1037/xlm0000967.

    Abstract

    Do we structure object-related conceptual information according to real-world sensorimotor experience, or can it also be shaped by linguistic information? This study investigates whether a feature of language coded in grammar—numeral classifiers—affects the conceptual representation of objects. We compared speakers of Mandarin (a classifier language) with speakers of Dutch (a language without classifiers) on how they judged object similarity in four studies. In the first three studies, participants had to rate how similar a target object was to four comparison objects, one of which shared a classifier with the target. Objects were presented as either words or pictures. Overall, the target object was always rated as most similar to the object with the shared classifier, but this was the case regardless of the language of the participant. In a final study employing a successive pile-sorting task, we also found that the underlying object concepts were similar for speakers of Mandarin and Dutch. Speakers of a non-classifier language are therefore sensitive to the same conceptual similarities that underlie classifier systems in a classifier language. Classifier systems may therefore reflect conceptual structure, rather than shape it.
  • Tilmatine, M., Hubers, F., & Hintz, F. (2021). Exploring individual differences in recognizing idiomatic expressions in context. Journal of Cognition, 4(1): 37. doi:10.5334/joc.183.

    Abstract

    Written language comprehension requires readers to integrate incoming information with stored mental knowledge to construct meaning. Literally plausible idiomatic expressions can activate both figurative and literal interpretations, which convey different meanings. Previous research has shown that contexts biasing the figurative or literal interpretation of an idiom can facilitate its processing. Moreover, there is evidence that processing of idiomatic expressions is subject to individual differences in linguistic knowledge and cognitive-linguistic skills. It is therefore conceivable that individuals vary in the extent to which they experience context-induced facilitation in processing idiomatic expressions. To explore the interplay between reader-related variables and contextual facilitation, we conducted a self-paced reading experiment. We recruited participants who had recently completed a battery of 33 behavioural tests measuring individual differences in linguistic knowledge, general cognitive skills and linguistic processing skills. In the present experiment, a subset of these participants read idiomatic expressions that were either presented in isolation or preceded by a figuratively or literally biasing context. We conducted analyses on the reading times of idiom-final nouns and the word thereafter (spill-over region) across the three conditions, including participants’ scores from the individual differences battery. Our results showed no main effect of the preceding context, but substantial variation in contextual facilitation between readers. We observed main effects of participants’ word reading ability and non-verbal intelligence on reading times as well as an interaction between condition and linguistic knowledge. We encourage interested researchers to exploit the present dataset for follow-up studies on individual differences in idiom processing.
  • Tourtouri, E. N., Delogu, F., & Crocker, M. W. (2021). Rational Redundancy in Referring Expressions: Evidence from Event-related Potentials. Cognitive Science, 45(12): e13071. doi:10.1111/cogs.13071.

    Abstract

    In referential communication, Grice's Maxim of Quantity is thought to imply that utterances conveying unnecessary information should incur comprehension difficulties. There is, however, considerable evidence that speakers frequently encode redundant information in their referring expressions, raising the question as to whether such overspecifications hinder listeners' processing. Evidence from previous work is inconclusive, and mostly comes from offline studies. In this article, we present two event-related potential (ERP) experiments, investigating the real-time comprehension of referring expressions that contain redundant adjectives in complex visual contexts. Our findings provide support for both Gricean and bounded-rational accounts. We argue that these seemingly incompatible results can be reconciled if common ground is taken into account. We propose a bounded-rational account of overspecification, according to which even redundant words can be beneficial to comprehension to the extent that they facilitate the reduction of listeners' uncertainty regarding the target referent.
  • Vágvölgyi, R., Bergström, K., Bulajić, A., Klatte, M., Fernandes, T., Grosche, M., Huettig, F., Rüsseler, J., & Lachmann, T. (2021). Functional illiteracy and developmental dyslexia: Looking for common roots. A systematic review. Journal of Cultural Cognitive Science, 5, 159-179. doi:10.1007/s41809-021-00074-9.

    Abstract

    A considerable proportion of the population in more economically developed countries is functionally illiterate (i.e., low literate). Despite some years of schooling and basic reading skills, these individuals cannot properly read and write and, as a consequence, have problems understanding even short texts. An often-discussed approach (Greenberg et al., 1997) assumes weak phonological processing skills coupled with untreated developmental dyslexia as possible causes of functional illiteracy. Although there are some data suggesting commonalities between low literacy and developmental dyslexia, it is still not clear whether these reflect shared consequences (i.e., cognitive and behavioral profiles) or shared causes. The present systematic review aims at exploring the similarities and differences identified in empirical studies investigating both functionally illiterate and developmentally dyslexic samples. Nine electronic databases were searched in order to identify all quantitative studies published in English or German. Although a broad search strategy and few restrictions were applied, only 5 of the resulting 9269 references were identified as adequate. The results point to the lack of studies directly comparing functionally illiterate with developmentally dyslexic samples. Moreover, substantial variance was identified between the studies in how they approached the concept of functional illiteracy, particularly with respect to critical categories such as the applied definition, terminology, criteria for inclusion in the sample, research focus, and outcome measures. The available data highlight the need for more direct comparisons in order to understand to what extent functional illiteracy and dyslexia share common characteristics.

    Additional information

    supplementary materials
  • Van Paridon, J., Ostarek, M., Arunkumar, M., & Huettig, F. (2021). Does neuronal recycling result in destructive competition? The influence of learning to read on the recognition of faces. Psychological Science, 32, 459-465. doi:10.1177/0956797620971652.

    Abstract

    Written language, a human cultural invention, is far too recent for dedicated neural infrastructure to have evolved in its service. Culturally newly acquired skills (e.g. reading) thus ‘recycle’ evolutionarily older circuits that originally evolved for different, but similar functions (e.g. visual object recognition). The destructive competition hypothesis predicts that this neuronal recycling has detrimental behavioral effects on the cognitive functions a cortical network originally evolved for. In a study with 97 literate, low-literate, and illiterate participants from the same socioeconomic background, we find that even after adjusting for cognitive ability and test-taking familiarity, learning to read is associated with an increase, rather than a decrease, in object recognition abilities. These results are incompatible with the claim that neuronal recycling results in destructive competition and consistent with the possibility that learning to read instead fine-tunes general object recognition mechanisms, a hypothesis that needs further neuroscientific investigation.

    Additional information

    supplemental material
  • Van Paridon, J. (2021). Speaking while listening: Language processing in speech shadowing and translation. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Van Paridon, J., & Thompson, B. (2021). subs2vec: Word embeddings from subtitles in 55 languages. Behavior Research Methods, 53(2), 629-655. doi:10.3758/s13428-020-01406-3.

    Abstract

    This paper introduces a novel collection of word embeddings, numerical representations of lexical semantics, in 55 languages, trained on a large corpus of pseudo-conversational speech transcriptions from television shows and movies. The embeddings were trained on the OpenSubtitles corpus using the fastText implementation of the skipgram algorithm. Performance comparable with (and in some cases exceeding) embeddings trained on non-conversational (Wikipedia) text is reported on standard benchmark evaluation datasets. A novel evaluation method of particular relevance to psycholinguists is also introduced: prediction of experimental lexical norms in multiple languages. The models, as well as code for reproducing the models and all analyses reported in this paper (implemented as a user-friendly Python package), are freely available at: https://github.com/jvparidon/subs2vec.

    Additional information

    https://github.com/jvparidon/subs2vec
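    As an illustration of the kind of pipeline described in this abstract, the sketch below trains fastText skipgram embeddings on a toy corpus and uses them to predict a small set of lexical norm ratings with ridge regression. It assumes the official fasttext Python bindings and scikit-learn; the corpus, the word list, and the norm values are invented placeholders, and the sketch is not the subs2vec code itself (see the repository linked above for the actual models and evaluation).

      # Hypothetical sketch: train fastText skipgram embeddings on a (toy) corpus
      # and predict lexical norm ratings from them with ridge regression.
      # Corpus, words and ratings are placeholders; see the subs2vec repository
      # for the real models and evaluation code.
      import fasttext
      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import cross_val_score

      # Write a tiny toy corpus (one sentence per line) so the sketch runs end to
      # end; a real subtitle corpus would be vastly larger.
      with open("toy_subtitles.txt", "w") as f:
          f.write("the cat sat on the mat\n" * 200)
          f.write("she drank a cup of strong coffee\n" * 200)

      # Train skipgram embeddings with fastText.
      model = fasttext.train_unsupervised("toy_subtitles.txt", model="skipgram", dim=50)

      # Placeholder norm ratings (word -> human rating), e.g. concreteness.
      norms = {"cat": 4.9, "mat": 4.7, "coffee": 4.8, "idea": 1.8, "cup": 4.9, "justice": 1.5}

      X = np.array([model.get_word_vector(w) for w in norms])  # embeddings as predictors
      y = np.array(list(norms.values()))                       # ratings as targets

      # Cross-validated prediction of the norms from the embeddings; with only a
      # handful of words this is purely illustrative.
      scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=3, scoring="r2")
      print("mean cross-validated r^2:", float(scores.mean()))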
  • Wolf, M. C., Meyer, A. S., Rowland, C. F., & Hintz, F. (2021). The effects of input modality, word difficulty and reading experience on word recognition accuracy. Collabra: Psychology, 7(1): 24919. doi:10.1525/collabra.24919.

    Abstract

    Language users encounter words in at least two different modalities. Arguably, the most frequent encounters are in spoken or written form. Previous research has shown that – compared to the spoken modality – written language features more difficult words. Thus, frequent reading might have effects on word recognition. In the present study, we investigated 1) whether input modality (spoken, written, or bimodal) has an effect on word recognition accuracy, 2) whether this modality effect interacts with word difficulty, 3) whether the interaction of word difficulty and reading experience impacts word recognition accuracy, and 4) whether this interaction is influenced by input modality. To do so, we re-analysed a dataset that had been collected during the development of a vocabulary test, in order to assess in which modality test words should be presented. Participants had carried out a word recognition task in which non-words and words of varying difficulty were presented in auditory, visual and audio-visual modalities. In addition to this main experiment, participants had completed a receptive vocabulary test and an author recognition test to measure their reading experience. Our re-analyses did not reveal evidence for an effect of input modality on word recognition accuracy, nor for interactions with word difficulty or language experience. Word difficulty interacted with reading experience in that frequent readers were more accurate in recognizing difficult words than individuals who read less frequently. Practical implications are discussed.
