
Josh P Davis and Nikolay Petrov

Face and Voice Recognition Lab

School of Human Sciences

Institute of Lifecourse Development

University of Greenwich

London SE10 9LS

7 February 2021

Advertised project name: Can you "Beat the Computer" at matching facial composites with a photo of the person they are supposed to depict?

Thanks to those 28,791 participants who supported our Innovate UK funded, E2ID collaboration between the University of Greenwich and VisionMetric Limited. In total, participants provided almost 1.5 million EFIT6-photo similarity ratings (n = 1,494,667). See previous blogs here and here. We are guessing that most were attracted to the project by the prize draws. In total, 18 randomly selected participants received a £50 Amazon voucher each.

What is E2ID?

Deep learning strategies for accurate identification from facial composite images (E2ID) (Solomon, Gibson, & Davis, 2019-2021) was supported by an Innovate UK grant awarded to VisionMetric Limited and the University of Greenwich. The E2ID project aimed to develop a radical new approach to achieving fast, accurate, automatic matching of facial composite images to police mugshot databases. The expected developments should give rise to improved investigative procedures for international police forces, leading to greater case closure and a reduced demand on police resources.

The role of Josh Davis and Nikolay Petrov at the University of Greenwich was to assist “with developing neural (deep learning) procedures that successfully map the human cognitive processes implicit in the recognition of facial composite images. In this way, machine behaviour can be tailored for the first time to achieve composite face recognition” in a roughly similar manner to humans.

So, who were these 28,791 participants?

Almost all will have been invited to take part while completing our fun 14-trial Could you be a Super-Recogniser Test?, which is available to members of the public (Davis, 2019). Participants completed the 14 trials and, just before receiving their scores, were asked if they wanted to have a go at “beating the computer” in new research.

If participants were interested in contributing, they then provided informed consent and demographic data before completing the 50 trials and being debriefed. Once they were debriefed, they received their Could you be a Super-Recogniser scores and were also asked if they wished to take part in additional tests to better determine their face recognition ability (Cambridge Face Memory Test: Extended (Russell et al., 2009), Glasgow Face Matching Test (Burton et al., 2010), Old-New 30-60 Face Recognition Test). These tests were in a separate link so that it was not possible to match these data to the E2ID data.

Note: The E2ID project link was open from 27 January 2020 to 5 February 2021. In this time, approximately 377,000 participants completed the Could you be a Super-Recogniser Test. The E2ID participants represent 7.6% of this number.

Scores on the Could you be a Super-Recogniser Test achieved by the E2ID participants can be found in Figure 1 (average = 10.09, maximum score = 14). This mean is slightly higher than that of all participants who normally complete the test (M = 9.64, Davis, 2019), suggesting that, despite not yet having received their scores, this research attracted those with higher face recognition ability, or greater motivation on the test.

Figure 1: Scores by E2ID participants on the Could you be a Super-Recogniser Test (out of 14)

Note: This ‘space’ is available for future suitable research projects.

Participants provided demographic data for each attempt at ‘beating the computer’. We cannot tell if any took part more than once due to data privacy rules. The ethnicity categories were selected as being some of the most common in previous research.


Gender

Male: 11,772

Female: 16,514

Other: 194

No gender information provided: 311


Age

Mean: 35.68

Standard deviation: 10.99

No age information provided: 525


Ethnicity

White: 21,140

Black: 227

Asian Sub-Continent: 372

Mixed Ethnicity: 975

Hispanic: 1,014

Arab or Middle Eastern: 213

Chinese: 301

Japanese: 204

Korean: 842

Turkish: 183

Kurdish: 16

Thai: 12

Filipino: 124

Iranian/Persian: 50

Other South American: 187

Other North American: 46

Other European: 1,653

Other East Asian: 61

Other West Asian: 10

Other South East Asian: 115

Other Australasia: 25

Other African: 30

Other Ethnicity: 594

No ethnicity information provided: 311

How did the ‘Can you beat the computer’ study work?

In this project, participants were invited to provide a series of 50 randomly allocated EFIT6 facial composite-facial photo similarity ratings (1: not at all similar – 7: highly similar) to pairs of images that were sometimes of the same person and sometimes of two different people. These pairs of images were provided by VisionMetric in 21 batches. The facial composites had been automatically generated by the EFIT6 system based on various parameters, designed to vary in the degree to which their appearance matched the real photo of the target. Some were depicted with paraphernalia (jewellery, facial hair, hats, etc.).

In total, the 28,791 participants provided 1,494,667 similarity ratings to pairs of images.

It was clear from previous research that participants would expect to receive a “score” on completion of the project. As similarity ratings cannot be directly converted into a measure of accuracy, we created an artificial score that, judging from feedback and pilot research, seemed to satisfy the few hundred participants who eventually requested further information by e-mail.

A score of 1 was given when a rating conformed with the true status of the composite-photo pair: 1-3 for different-person pairs, or 5-7 for same-person pairs. All ratings of 4, and all non-conforming responses, received a score of 0. We realise this is an imperfect method, but it probably approximates the actual beliefs of participants.

From these calculations, participants received a score out of 50. Note – scores on the test did not influence likelihood of winning a £50 prize. This was entirely random.
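The scoring rule above can be sketched in a few lines of Python. This is purely an illustration of the conversion described in the text, not the project's actual code; the function names and data layout are hypothetical:

```python
def trial_score(rating: int, same_person: bool) -> int:
    """Convert one similarity rating (1-7) to a 0/1 'score'.

    A rating of 4 always scores 0. Ratings of 5-7 score 1 only when
    the pair really was the same person; ratings of 1-3 score 1 only
    when the pair really was two different people.
    """
    if rating == 4:
        return 0
    if same_person:
        return 1 if rating >= 5 else 0
    return 1 if rating <= 3 else 0


def total_score(trials) -> int:
    """Sum trial scores over a participant's set of (rating, same_person)
    pairs, giving a score out of 50 for a full 50-trial set."""
    return sum(trial_score(rating, same) for rating, same in trials)
```

For example, a rating of 6 on a same-person pair scores 1, while the same rating on a different-person pair scores 0.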

Due to the random allocation system, one participant’s set of 50 trials may have been far harder to rate than a second participant's, so direct comparison of scores is not advisable. Nevertheless, in the spirit of openness, Figure 2 depicts the range of ‘scores’ calculated using this method. If you achieved a very low score, you may have been issued with very hard pairs to rate. On the other hand, the slightly anomalous peak in scores of 50 in comparison to the negative skew of the remaining data suggests that these participants received relatively easy pairs to rate.

Figure 2: Converted E2ID paired similarity scores (out of 50)

Pilot study

Prior to starting the E2ID study, we conducted a pilot study at the end of 2019 (n = 5,256) aiming to identify the ideal number of image pairs to include in a set to be rated, so as to ensure maximum response numbers (i.e., not to discourage participants by asking them to rate too many images). Note for psychologists: we varied the information participants received about the demands of the study, asking them to rate 10, 20, or 50 image pairs (see blog written before pilot data collection ended).

Not surprisingly, participants were more likely to volunteer when asked to rate 10 pairs than 20, and 20 than 50. However, the drop in participant numbers was compensated for by the increase in the total number of ratings per participant – so the final study contained 50 trials.

We also assessed the ideal number of ratings that should be given to each pair in a set. In other words, how much extra value is given to the final data by increasing the number of participants providing similarity ratings per pair? We think we found the ‘sweet spot’ and hope to publish these data eventually.


References

Burton, A. M., White, D., & McNeill, A. (2010). The Glasgow Face Matching Test. Behavior Research Methods, 42, 286–291. DOI: 10.3758/BRM.42.1.286

Davis, J. P. (2019). The worldwide impact of identifying super-recognisers in police and business. The Cognitive Psychology Bulletin; Journal of the British Psychological Society: Cognitive Section, 4, 17-22. ISSN: 2397-2653.


Russell, R., Duchaine, B. & Nakayama, K. (2009). Super-recognizers: People with extraordinary face recognition ability. Psychonomic Bulletin & Review, 16, 252–257.

Solomon, C., Gibson, S., & Davis, J. P. (2019-2021). Deep learning strategies for accurate identification from facial composite images (E2ID). Innovate UK.

