The aim of this study was to determine the patterns of workplace-based assessment (WBA) integration throughout postgraduate medical training curricula in six training bodies. Our main finding is that, while the level of implementation varied, the majority of trainees experienced at least one WBA during the academic year.
The picture that emerged in this observational study mirrors in many ways the issues identified internationally, particularly those relating to ineffective feedback and limited formative impact. We found that documentation of effective written feedback was limited; however, because these assessments take place in real time with the trainer and trainee present, verbal feedback may also be given that is not then transferred to the assessment forms. A number of international institutions have implemented WBA smartphone and tablet ‘apps’ that allow real-time completion and uploading of assessment feedback.
Another barrier to the provision of feedback in our study may have been the lack of an explicitly titled free-text ‘feedback’ section: on these assessments the free-text section was titled ‘comments’ and was therefore interpreted by some trainers as inviting comments on the case rather than on the trainee’s performance.
In our study, at both BST and HST level, trainees were more likely to complete DOPS assessments than the mini-CEX or CbD. This finding is in keeping with a UK study of dermatology trainees, in which 138 trainees completed 251 DOPS compared with 142 mini-CEX assessments (Cohen et al. 2009). Respondents in that study reported that the mini-CEX and multisource feedback (MSF) tended to feel more ‘artificial’ than DOPS; they also reported dissatisfaction with the quality of feedback provided on all assessments, despite overall positivity about the benefits of WBAs. While there is limited empirical research exploring trainer and trainee preferences regarding assessment, it may be that trainers and trainees perceive DOPS as a more objective measure of performance than the more subjectively perceived assessments of, for example, communication and professionalism. It is interesting to note, however, that in a study of psychiatry trainees, for whom procedure-based WBAs are not usually required, Menon et al. (2009) also reported that trainees were ‘unimpressed’ with the introduction of these assessments, querying their reliability, validity and impact on the quality of training.
Our study found that the majority of WBAs took place in the second half of the year. This pattern, together with the limited provision of written feedback and follow-up assessments, points towards limited use of these assessments to inform learning and development. During the implementation of WBA in the UK, a study of paediatric trainees reported that WBAs were still viewed as a ‘tick-box’ exercise (Bindal et al. 2011). Menon et al. (2012) reported that psychiatry trainers and trainees understood the introduction of WBA to be driven both by a desire to improve training and by political considerations, describing it as ‘politically driven’; comments from these trainees also referenced a ‘tick-box exercise’ designed purely to fulfil end-of-year assessment requirements. In a recent review of the issues underlying the problems encountered in WBA implementation, Swayamprakasam et al. (2014) also pointed to the need for widespread communication strategies to inform, or re-inform, understanding of the purpose of WBA.
The potential ‘floor’ and ‘ceiling’ effects of WBA also warrant further investigation. In this study, the low number of assessments documenting competence that was ‘borderline’ or ‘below expectation’ raises a number of issues around ‘failure to fail’. Trainers’ reluctance and anxiety around delivering negative feedback are well documented (Kogan et al. 2012), as are issues with the rating systems used to structure this feedback (Hassell et al. 1035). In our assessments, the use of an ‘expectations’ rating scale (e.g. ‘above expectation’, ‘meets expectations’) in mini-CEX and CbD assessments, without explicit reference to curriculum outcomes or competencies, may also have been perceived as overly subjective and less conducive to learning.
This is the first large-scale study of WBA implementation in Ireland. The methodology was rigorous, and quality checks were implemented to ensure the accuracy of the data. The study provides an overview of the varied integration of the assessments since the introduction of the tools and has highlighted issues similar to those identified internationally. It was designed to provide a thorough foundation for an extensive programme of research on WBA in the Irish postgraduate medical education context, and it will form the basis of a large in-depth qualitative study exploring the value of WBA to both trainers and trainees. The findings have also highlighted a number of areas for further development, particularly regarding how the assessments are implemented and evaluated. One of the main limitations of the study lies in the evaluation of feedback quality: only written feedback was extracted, which may not accurately or fully reflect the quality or richness of the verbal feedback provided at the end of the workplace-based assessments.