On April 7, 2022, Emily Blamire presented "Linguistic and Social Factors of Voice Recognition."
The Zoom link for the meeting was shared here; the password was distributed through the mailing list.
Abstract
This presentation will discuss the preliminary results of two experiments on voice recognition. The first uses time-reversed speech to test whether the Language Familiarity Effect (LFE) is sensitive to top-down processing of language group, while the second uses a sentence-recognition demasking paradigm to address questions about voice recognition across different native dialects of the same language.

The LFE, in which listeners are better at recognizing voices in their native language than in languages in which they are not fluent (e.g., Thompson, 1987; Goggin et al., 1991), appears to be driven by familiarity with the phonology of the language (e.g., Fleming et al., 2014; Johnson et al., 2018; Lember, 2015). However, listeners' perception of various phonological features can shift depending on their beliefs about a talker's group identity. For example, listeners' vowel perception shifted when they were led to believe that the same talkers were alternately male or female (Strand & Johnson, 1996) or of either Australian or New Zealand descent (Hay & Drager, 2010). What would happen if such shifts of perception were induced in voice recognition, in the context of the LFE? Preliminary results on the effect of top-down in-group processing on the recognition of Dutch and English voices will be presented. Speech was time-reversed so as to mask the language, and English-speaking participants were either told the truth or misled about the language they were hearing.

Another area of voice recognition research has focused on the effect of dialect. Vanags et al. (2005), comparing Australian and British English talkers, and Kerstholt et al. (2006), comparing talkers of Standard Dutch and Dutch from The Hague, both found an own-accent bias in voice recognition. However, both studies were unidirectional, with participants drawn from only one of the two accent groups. In contrast, Johnson et al. (2018) found no own-accent bias among either Canadian or Australian participants when presented with both Canadian and Australian English talkers. To explain these results, the authors suggested at the time that listeners must be able to access a level of abstract phonological processing during voice recognition. Presented in this talk are results from a sentence demasking task in which the same stimuli used in Johnson et al. (2018) were presented to listeners in progressively decreasing amounts of noise. Preliminary results of this experiment show that Canadian listeners are not as good at recognizing sentences produced by the Australian talkers as those produced by the Canadian talkers when the sentences are embedded in noise. This undermines the idea that Canadian listeners can access an abstracted level of phonology with equal accuracy when listening to both sets of voices. Suggestions for how to interpret these results in light of Johnson et al.'s (2018) findings, along with the planned next experiment to further explore this issue, will be discussed.
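As a rough illustration of the two stimulus manipulations described in the abstract, the sketch below shows how speech might be time-reversed and how a sentence might be embedded in noise at a series of signal-to-noise ratios. It is a minimal sketch assuming Python with the numpy and soundfile packages, using placeholder file names and white noise as the masker; it is not the actual stimulus-preparation pipeline used in these experiments.

    import numpy as np
    import soundfile as sf

    # Load a mono speech recording (file name is a placeholder).
    speech, sample_rate = sf.read("talker_utterance.wav")

    # Experiment 1 manipulation: time-reverse the waveform so the linguistic
    # content becomes unintelligible while voice-quality cues remain.
    reversed_speech = speech[::-1]
    sf.write("talker_utterance_reversed.wav", reversed_speech, sample_rate)

    # Experiment 2 manipulation: embed a sentence in noise at a target
    # signal-to-noise ratio (SNR); lower SNR means more masking noise.
    def mix_at_snr(signal, snr_db):
        noise = np.random.default_rng(0).standard_normal(len(signal))
        signal_power = np.mean(signal ** 2)
        noise_power = np.mean(noise ** 2)
        # Scale the noise so that 10 * log10(signal_power / scaled_noise_power) == snr_db.
        scale = np.sqrt(signal_power / (noise_power * 10 ** (snr_db / 10)))
        return signal + scale * noise

    # A demasking series: the same sentence at progressively less noise.
    for snr_db in (-6, 0, 6, 12):
        sf.write(f"sentence_snr_{snr_db}dB.wav", mix_at_snr(speech, snr_db), sample_rate)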