
How to evaluate training in interviewing

Continuing the critical look at the science of training in investigative interviewing.

This article continues the critical analysis of police training in investigative interviewing. The focus here is on how to evaluate training.

How to evaluate training in interviewing

There are two main ways to conduct evaluations.

First, the performance of untrained officers can be compared to that of trained officers. This is a classic experimental design involving a control group (untrained) and an experimental group (trained).

Second, the performance of trained officers can be compared to the standards and protocols as set out in training materials. In this type of study, the performance of the trained group is scored in terms of how well it meets specified standards. A control group is not required as the standards set the parameters for assessing performance.

The measure of ‘performance’ can vary between studies. For example, one study might focus on a single aspect of training, whilst another might assess the entire interview process.
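As a rough sketch of how these two designs differ in practice, the comparison below uses entirely hypothetical interview quality scores and a hypothetical performance standard (the numbers, threshold, and scoring scale are illustrative assumptions, not drawn from any real study):

```python
import statistics

# Hypothetical interview quality scores (0-100 scale); illustrative only.
untrained = [55, 60, 52, 58, 61, 57]
trained = [68, 72, 65, 70, 74, 69]

# Design 1: control-group comparison - mean score of the trained
# (experimental) group versus the untrained (control) group.
diff = statistics.mean(trained) - statistics.mean(untrained)
print(f"Mean difference (trained - untrained): {diff:.1f}")

# Design 2: standards-based scoring - proportion of trained officers
# whose performance meets a specified standard (hypothetical threshold).
# No control group is needed; the standard sets the benchmark.
STANDARD = 70
met = sum(score >= STANDARD for score in trained) / len(trained)
print(f"Proportion of trained officers meeting standard: {met:.0%}")
```

Note that the first design answers "does training make a difference?", whilst the second answers "does trained performance meet the required standard?"; the two questions can yield quite different conclusions from the same data.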

There are alternative ways to conduct evaluations, but these rarely result in research findings that are clear and actionable. For example, Cunningham (2010) conducted an evaluation of investigative interviewing training in New Zealand by asking officers (both untrained and trained) to self-report on their interviewing practices. Cunningham acknowledges that:

“This evaluation did not objectively assess whether Level 1 training and accreditation has had any effect on investigative interviewing practice. To do this, a sample of interviews conducted by staff who had been trained and staff who had not would need to be assessed against interviewing standards that the training addresses and compared for differences…. In the current study self-reported frequency of use of aspects of the PEACE framework was requested in the questionnaire to assess level of use of interviewing techniques.” (p.15)

Cunningham then states:

“It should be noted that asking staff to report how frequently they used each aspect of interviewing is not an objective measure of their actual use. It may be that those who had been trained (and those that are accredited) were more aware of what constitutes good practice in investigative interviews and therefore responded in a more positive way than is actually the case in practice. A more objective approach to determining whether any application of the PEACE interviewing techniques had taken place…. may not necessarily have found such promising results.” (p.19)

As a consequence of these serious limitations, the findings from self-report evaluations such as Cunningham’s study must be interpreted with extreme caution. It is possible that trained interviewers are ‘better’ on some measure than untrained interviewers, but it is also possible that they simply know which answers to give when asked about their performance. What someone says they do may not accurately reflect their behaviour.

Dr Stephen Moston


Cunningham, S. (2010). Evaluation of the implementation of investigative interviewing training and assessment (Level 1) final report.

Suggested citation (APA 7)

Moston, S. (2021, January 26). How to evaluate training in interviewing. Forensii.

Feedback corner

I’d love to hear what you think about this post. Do you agree with me? Do you disagree?

You can write to me - - and let me know what you think.

Other resources

If you’ve enjoyed this blog post, I invite you to check out Forensii, which offers online forensic psychology education and is the essential resource for information on investigative interviewing.

