Updated: Nov 18


Professor Josh P Davis

BSc MSc PhD AFBPsS FHEA CPsychol

Face and Voice Recognition Lab

School of Human Sciences

Institute of Lifecourse Development

University of Greenwich

London SE10 9LS

super-recognisers@greenwich.ac.uk

https://www.gre.ac.uk/pg/eduhea/psych


25 October 2022


Results of Voice Identity Processing Tests


Ryan Jenkins (PhD Student: University of Greenwich)

Professor Josh P Davis (Supervisor: University of Greenwich)


Since 2018, voice identity processing tests have been included within our research programme, primarily to support the studies of PhD student Ryan Jenkins. Ryan published the first part of his PhD research on voice identity processing in 2021 (Jenkins et al., 2021 – read a summary here). To help volunteers from the University of Greenwich Face and Voice Recognition Lab Volunteer Pool understand how their scores compare to those of other members of the pool, the distributions of scores for each of the key voice tests are shown in Figures 1 – 7. From the histograms, you should be able to gauge how your voice recognition skills compare.


The numbers of participants completing each voice test in our battery are listed in Table 1. In November and December 2022, we will be inviting members of the volunteer pool to take all the tests in the battery, so that we can better understand the relationships between the ability to recognise faces and voices, as well as some of the components underlying these skills (e.g., Short-Term Face and Voice Memory, Simultaneous Face and Sequential Voice Matching Ability, Long-Term Face and Voice Memory, and Unfamiliar and Familiar Face and Voice Recognition Ability).


This is partly why the results of the Famous Face Recognition Test can be found in this blog, rather than in our parallel blog, where you will find similar histograms of scores on face identity processing tests (click here). Indeed, this test was developed during Ryan’s PhD.


Table 1: Numbers of participants who have taken each test.


As we have consistently found in our face identity processing research, we predict that, in general, members of the volunteer pool will tend to generate scores on these tests that are higher than those expected from a representative sample of the population. This is likely due to a self-selection bias: the participants who are most interested in the challenges of the tests tend to be those who score the highest.


Therefore, where research has been published in which a more representative sample is likely to have contributed, we have superimposed green lines representing the mean scores of the samples reported in those studies (e.g., Bangor Voice Matching Test, Glasgow Voice Memory Test). We have also included two red lines representing the values one standard deviation (SD) above and one SD below the mean of those samples. Approximately 68% of the population would be expected to generate a score that falls between those two lines, whereas only about 16% would be expected to score above the upper red line to the right of each figure, and 16% below the lower red line on the left.
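For readers who want to verify these percentages, they follow directly from the properties of the normal distribution. Below is a minimal Python sketch (our own illustration, assuming scores are approximately normally distributed; nothing here comes from the tests themselves):

```python
from statistics import NormalDist

z = NormalDist()  # standard normal distribution: mean 0, SD 1

# Proportion of the population expected within one SD of the mean,
# and in each tail beyond one SD.
within_one_sd = z.cdf(1) - z.cdf(-1)  # ~0.683
above_one_sd = 1 - z.cdf(1)           # ~0.159
below_one_sd = z.cdf(-1)              # ~0.159

print(f"Within ±1 SD: {within_one_sd:.1%}")  # ~68.3%
print(f"Above +1 SD:  {above_one_sd:.1%}")   # ~15.9%
print(f"Below -1 SD:  {below_one_sd:.1%}")   # ~15.9%
```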


Super-Voice-Recogniser?


If your scores on these tests are consistently within the top right sector of each figure, it is possible that you may be a super-voice-recogniser. We have published research (Jenkins et al., 2021 – read a summary here), based on three tests, suggesting that some people may possess highly superior voice recognition skills, in the same way that others are super-face-recognisers. Intriguingly, the research also found that super-face-recognisers were significantly more likely than those with typical face recognition ability to score in the highest range on the voice recognition tests. This research therefore provided preliminary evidence that there may be a commonality linking superior skills in the voice and face modalities. Nevertheless, despite significantly outperforming controls as a group, only approximately 7% of the super-face-recognisers demonstrated superior voice identity processing skills on at least two of the three tests.
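To make the “at least two of three tests” criterion concrete, here is a purely illustrative sketch. The scores and cut-offs below are invented for the example; the actual criteria are defined in Jenkins et al. (2021):

```python
# Purely illustrative: flag a volunteer as a *potential* super-voice-recogniser
# if they exceed a cut-off on at least two of three tests.
scores = {"GreVRT88": 72, "BVMT": 68, "GVMT:V": 15}   # hypothetical scores
cutoffs = {"GreVRT88": 70, "BVMT": 66, "GVMT:V": 14}  # hypothetical thresholds

n_superior = sum(scores[test] > cutoffs[test] for test in scores)
if n_superior >= 2:
    print(f"Potential super-voice-recogniser: superior on {n_superior} of 3 tests")
```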


The final phase of the PhD research, to be conducted over the next few months, will explore these themes in more depth, with the inclusion of additional tests not currently listed in this blog. Some of the listed tests have also been updated with new stimuli.


Updated voice test platform, improved connectivity, and fewer problems


We are aware that many participants who took the voice tests previously found that the very shortest voice samples sometimes did not play reliably, which adversely affected their performance. This was mainly due to poor connectivity between the platform on which the tests were hosted and the participant’s browser or operating system; indeed, many participants experienced no problems at all. Nevertheless, we have loaded some of the key tests onto a new platform (Gorilla). Pilot studies suggest that problems are now rare – they occur at about the same rate as on our face tests (less than 5%).


Therefore, we will be inviting everyone in the volunteer pool to take the key voice tests, including those who reported breakdowns previously. If you suffered IT problems previously but did not report them, please let us know when you take the tests.


Current voice identity processing battery and participant results


Greenwich Voice Recognition Test Version 1 (GreVRT88: V1)


Figure 1 presents the data collected from the volunteer participant pool for the Greenwich Voice Recognition Test. The data from members of the public (n = 354) were collected after the test was featured on BBC1 Crimewatch Live (extra blog post here: https://tinyurl.com/2p95peac).


The histogram below illustrates the results of Version 1 of the Greenwich Voice Recognition Test (GreVRT88). A second, 104-trial version of the test (GreVRT104) has been developed and is now available to take.


Figure 1: Greenwich Voice Recognition Test (GreVRT88: V1) volunteer participant pool scores


Figure 2: Bangor Voice Matching Test (BVMT) volunteer participant pool scores

Note: BVMT “typical population” data (n = 299) were compiled from three articles (Johnson, McGettigan, & Lavan, 2020; Mühl & Bestelmeyer, 2019; Mühl, Sheil, Jarutytė, & Bestelmeyer, 2018).



Figure 3: Glasgow Voice Memory Test: Voice (GVMT:V) section volunteer participant pool scores

Note: GVMT:V “typical population” data (n = 1,120) were reported in Aglieri et al. (2017).



Figure 4: Glasgow Voice Memory Test: Bell (GVMT:B) section volunteer participant pool scores

Note: GVMT:B “typical population” data (n = 1,120) were reported in Aglieri et al. (2017).



Figure 5: Famous Voice Recognition Test (FVRT) volunteer participant pool scores



Figure 6: Long-Term Voice Memory Test (LTVMT) volunteer participant pool scores


Figure 7: Famous Face Recognition Test (FFRT) volunteer participant pool scores



Tests described in this blog


Greenwich Voice Recognition Test. This test is a primary output of the PhD research of Ryan Jenkins. The test was featured on BBC1 Crimewatch Live (extra blog post here: https://tinyurl.com/2p95peac). The version described in this blog is an 88-trial test that measures unfamiliar short-term voice memory ability. Participants first listen to a series of voices during a learning phase and then immediately complete a recognition phase. During the recognition phase, across 88 voice trials, participants must determine whether each voice is old (heard during the learning phase) or brand new. Note – this test is currently being updated to a longer version and is being sent out to participants in the University of Greenwich Face and Voice Recognition Lab volunteer participant pool. When sufficient data have been collected, this blog will be updated with new scores.
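The blog does not describe how the test is scored, but a common way to summarise performance on an old/new recognition task of this kind is the signal detection measure d′ (sensitivity), which combines hits on old voices with false alarms on new ones. A minimal sketch, assuming for illustration an even old/new split across the 88 trials:

```python
from statistics import NormalDist

def d_prime(hits: int, misses: int,
            false_alarms: int, correct_rejections: int) -> float:
    """Signal-detection sensitivity for an old/new recognition test.

    Adds 0.5 to each cell and 1 to each denominator (a log-linear
    correction) so hit or false-alarm rates of 0 or 1 do not produce
    infinite z-scores.
    """
    n_old = hits + misses
    n_new = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_old + 1)
    fa_rate = (false_alarms + 0.5) / (n_new + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical responses: 44 old and 44 new voices (88 trials in total).
print(d_prime(hits=36, misses=8, false_alarms=12, correct_rejections=32))
```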


Bangor Voice Matching Test (Mühl et al., 2018). This 80-trial standardised test measures unfamiliar voice matching ability. Participants must decide whether pairs of voice clips are of the same person (matched; 40 trials) or of two different speakers (mismatched; 40 trials). We ask participants to complete different tests measuring similar abilities (memory or matching) because this allows us to derive a reliable estimate of individual differences in voice identity processing ability.
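For a same/different matching task of this kind, it can be informative to report accuracy separately for matched and mismatched trials rather than as a single overall score, because the two trial types are sensitive to response bias. A small illustrative sketch (the trial counts follow the test’s 40/40 design; the responses are invented):

```python
# Hypothetical response records: one boolean per trial (correct/incorrect),
# split by whether the pair was the same speaker or different speakers.
same_trials = [True] * 34 + [False] * 6    # 40 matched trials
diff_trials = [True] * 30 + [False] * 10   # 40 mismatched trials

same_acc = sum(same_trials) / len(same_trials)
diff_acc = sum(diff_trials) / len(diff_trials)
overall = (sum(same_trials) + sum(diff_trials)) / 80

# Reporting the two trial types separately reveals response bias:
# a participant who always answers "same" scores 100% on matched
# trials but 0% on mismatched trials.
print(f"matched: {same_acc:.0%}, mismatched: {diff_acc:.0%}, overall: {overall:.0%}")
```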


Glasgow Voice Memory Test (Aglieri et al., 2017). This 32-trial standardised test measures both unfamiliar short-term voice memory and short-term sound (bell) memory ability. The test is split into two parts. In Part 1, participants listen to a series of voices during a learning phase and then immediately complete a recognition task in which they must determine whether each of 16 voices is old (heard in the learning phase) or new (heard only in the recognition phase). Part 2 mirrors the design of Part 1, but uses bell sounds instead of voices.


Famous Voice Recognition Test. This test was designed as part of PhD student Ryan Jenkins’ research. It is a test of familiar voice memory, in which the task is to identify the famous voices heard during the test. We ask participants to complete tests measuring different abilities (unfamiliar vs familiar) because this allows us to derive a reliable estimate of voice processing ability in general, particularly given the differences between recognising a familiar voice and an unfamiliar one. Note – this test is currently being updated and is being sent out to participants in the University of Greenwich Face and Voice Recognition Lab volunteer participant pool. When sufficient data have been collected, this blog will be updated with new scores.


Long-Term Voice Memory Test. This test was designed as part of PhD student Ryan Jenkins’ research. It is a test of long-term unfamiliar voice memory, split into two parts. In Part 1, participants learn a series of voices. After a longer delay, participants complete Part 2, in which they are tasked with determining whether each of a series of voices is old (heard during Part 1) or brand new. We include a long-term voice memory test because this type of task is typical of real life, for example in earwitness testimony and voice identity line-ups: if the witness of a crime hears the voice of a perpetrator, there will likely be a delay before they are asked to identify that voice from a line-up. Note – this test is currently being updated and is being sent out to participants in the University of Greenwich Face and Voice Recognition Lab volunteer participant pool. When sufficient data have been collected, this blog will be updated with new scores.


Famous Face Recognition Test. This test was designed as part of PhD student Ryan Jenkins’ research, as a comparison for the Famous Voice Recognition Test. It is a test of familiar face memory, in which the task is to identify the famous faces presented during the test. We ask participants to complete tests of familiar face (and indeed familiar voice) recognition because research has found similarities in how familiar faces and voices are recognised and processed in the brain. By looking at both familiar face and voice recognition abilities, we can build a fuller picture of identity recognition in general. Note – this test is currently being updated and is being sent out to participants in the University of Greenwich Face and Voice Recognition Lab volunteer participant pool. When sufficient data have been collected, this blog will be updated with new scores.


Ethics associated with our volunteer pool database


All participants give their consent for their data to be stored. Participants’ email addresses are stored in a second repository so that they can continue to be invited to participate in future research. When the EU General Data Protection Regulation (GDPR) came into effect in 2018, all participants were contacted again by email about the storage of their data; where there was no response, the corresponding data were deleted. Participants who have taken part in the tests since then receive the GDPR information immediately, including information on the possibility of withdrawing their consent at any time.


More information about our ethical and data protection procedures can be found here.



References


Aglieri, V., Watson, R., Pernet, C., Latinus, M., Garrido, L., & Belin, P. (2017). The Glasgow Voice Memory Test: Assessing the ability to memorize and recognize unfamiliar voices. Behavior Research Methods, 49(1), 97-110. https://doi.org/10.3758/s13428-015-0689-6


Jenkins, R. E., Tsermentseli, S., Monks, C. P., Robertson, D. J., Stevenage, S. V., Symons, A. E., & Davis, J. P. (2021). Are super-face-recognisers also super-voice-recognisers? Evidence from cross-modal identification tasks. Applied Cognitive Psychology, 35(3), 590-605. https://doi.org/10.1002/acp.3813


Johnson, J., McGettigan, C., & Lavan, N. (2020). Comparing unfamiliar voice and face identity perception using identity sorting tasks. Quarterly Journal of Experimental Psychology, 73(10), 1537–1545. https://doi.org/10.1177/1747021820938659


Mühl, C., & Bestelmeyer, P. E. G. (2019). Assessing susceptibility to distraction along the vocal processing hierarchy. Quarterly Journal of Experimental Psychology, 72(7), 1657–1666. https://doi.org/10.1177/1747021818807183


Mühl, C., Sheil, O., Jarutytė, L., & Bestelmeyer, P. E. G. (2018). The Bangor Voice Matching Test: A standardized test for the assessment of voice perception ability. Behavior Research Methods, 50, 2184–2192. https://doi.org/10.3758/s13428-017-0985-4



Important Links


Super-recognisers website · Twitter · ORCID


Free to download publications · ResearchGate · TV and media



