The English EQ-5D-5L study was conducted in 2012-2013, in the first wave of EQ-5D-5L valuation studies. A set of problems encountered in the cTTO data of this first wave led to subsequent refinement of the valuation protocol. The updated version of the protocol elicits values using the same valuation techniques, but the way the tasks are implemented has improved. Recognising that the accuracy of collected responses and the level of detail interviewers provided when explaining the task varied, EuroQol introduced a quality control (QC) procedure to review protocol compliance and interviewer effects. A post hoc analysis of data quality is reported here to examine interviewer performance in the English EQ-5D-5L study. The key findings of the QC report are presented on page 3 (table 2, reporting protocol compliance) and page 13 (figure 18, depicting interviewer effects in collected responses). The picture that emerges from the data is as follows:

1. Interviewer compliance with the protocol was low (table 2). The team would have kept only 138 of the 998 interviews had they followed the current guidance. The remaining 860 interviews would not be part of the final dataset, because the low levels of protocol compliance would have prompted early intervention. For every interviewer who failed to meet the QC requirements initially (i.e. after 10 interviews), that batch of 10 interviews would have been discarded and the interviewer retrained; they would only be allowed to contribute new data once improved performance had been confirmed.

2. Strong interviewer effects are present in the data (figure 18). Figure 18 shows little similarity in the distribution of collected responses across interviewers, suggesting interviewer effects. The particular way in which an interviewer asked questions or accepted an answer appears to have influenced the responses. Strong clustering is found in the responses from some interviewers.
This suggests censoring, satisficing, or a lack of task understanding, as discussed by Stolk et al (2019). Interviewer and clustering effects usually dissolve over time as interviewer performance improves with experience and feedback. In England that has not happened, because the fieldwork was carried out by 50 interviewers who each conducted on average 20 interviews. This means that little benefit could be gained from the feedback offered by the study team.

These findings need to be interpreted with care. The presence of protocol violations does not automatically imply bad data, because compliance is defined on procedural grounds. For instance, the protocol is violated if the worse-than-dead task is not explained in the wheelchair example (applicable to 70% of the interviews), but it is possible that the explanation was given later, when a respondent was confronted for the first time with a health state worse than dead. Yet the opposite is also true: if a respondent did not value any state as worse than dead, one would like to confirm that the respondent was aware of the possibility of giving negative values, in order to distinguish censored observations from responses that reflect true values. Similarly, interviewer effects and clustering of data suggest that collected responses have low accuracy. Depending on the treatment such observations receive when they are modelled, the resulting values may nevertheless be found to reflect true preferences well. The careful way in which the English team searched for behavioural explanations for patterns in the data to inform their modelling approaches suggests higher levels of external validity than one might expect considering only the face validity of the data.