



Theme: Simulator
Institution: School of Medicine, University of St Andrews
Learning surgical skills by simulation is a safe and effective adjunct to training in the operating theatre [1]. Access to simulation equipment remains limited, however, with existing simulators generally bulky, expensive and largely confined to institutions [1,2]. To help facilitate simulation training, portable 'take-home' simulators have been proposed [3]. It is recognised, however, that to maximise learning, a degree of incentivisation is required. One potential way of motivating practice is to encourage competition in the form of peer-to-peer performance comparison. This study explores the effect of an online leader board on skills improvement during a period of take-home simulation training.
Twenty pre-clinical medical students were randomised into two groups of ten. Both groups were given a 'take-home' laparoscopic ('key-hole' surgery) simulator (eoSim, eoSurgical Ltd., Edinburgh, UK) and shown videos of two simulated tasks: peg-threading and precision cutting (as demonstrated in the video).
Both groups performed these tasks under controlled conditions and their baseline scores were recorded. All participants were encouraged to practise on the simulators at home before returning for a repeat standardised performance assessment after two weeks.
Figure 1: An example of the eoSim surgical simulator used in the study.
The control group had access to a private Facebook page with the study instructions and demonstration videos. The intervention group had a different private Facebook page, which included an online leader board of performance scores. Users in this group were encouraged to share videos of their performances each time they practised. Their scores were then uploaded onto the leader board and shared between all users of the intervention group (see example leader board below). The groups ran in series to blind the first (control) group to the existence of the leader board in the intervention group.
Outcome measures were:
- Degree of improvement in skills between baseline and final assessment in both groups - does a period of 'take-home' simulation improve skills?
- Difference between groups - does performance score comparison result in a greater change in performance?
- Frequency of practice - does a leader board encourage more frequent practice?
- Subjective experience with an online leader board (assessed by questionnaire in the intervention group).
Figure 2: Online leader board of time to complete the peg-threading task (in seconds), by user.
This study demonstrated a significant improvement in peg-threading skills during a period of take-home simulation training. While subjective feedback supported the usefulness of an online leader board of skills for peer-to-peer comparison to incentivise simulator use, the online leader board group did not demonstrate greater objective improvement when compared to controls.
This study's limitations include small numbers in each group and relatively poor engagement with the cutting task in comparison to the peg-threading task. Furthermore, it became apparent after the study that some members of the control group had realised they could privately share videos of their performances online with other members of the control group. It is thought that this created a similar degree of motivation to practise as the formal leader board, thus confounding the analysis. Further work will involve repeating the study with larger numbers, assessing trainee surgeons in addition to medical students, and preventing the control group from having any contact with other members of that group (blinding to other participants).
This study does support the effectiveness of a period of take-home simulator training and suggests that peer-to-peer skills comparison could be a useful addition to this emerging training paradigm.
1. Khera, G., Milburn, J., Hornby, S. and Malone, P. (2011). Simulation in Surgical Training. Association of Surgeons in Training.
2. Milburn, J. A., Khera, G., Hornby, S. T., Malone, P. S. C. and Fitzgerald, J. E. F. (2012). Introduction, availability and role of simulation in surgical education and training: review of current evidence and recommendations from the Association of Surgeons in Training. Int J Surg 10, 393–398.
3. Korndorffer, J. R., Bellows, C. F., Tekian, A., Harris, I. B. and Downing, S. M. (2012). Effective home laparoscopic simulation training: a preliminary evaluation of an improved training paradigm. The American Journal of Surgery 203, 1–7.
4. Woodrum, D. T., Andreatta, P. B., Yellamanchilli, R. K., Feryus, L., Gauger, P. G. and Minter, R. M. (2006). Construct validity of the LapSim laparoscopic surgical simulator. The American Journal of Surgery 191, 28–32.
*Royal Hospital for Sick Children, Edinburgh, UK
**Alder Hey Children's Hospital, Liverpool, UK
With gratitude to D Currie for invaluable statistical support.
Did the entire cohort of participants benefit from taking the simulators home for two weeks?
Wilcoxon signed-rank tests comparing initial and final scores across both skills showed that the participants' peg-threading times improved significantly (p < 0.05). There was, however, no statistically significant difference between the initial and final precision cutting scores (p = 0.55).
Figure 3: Wilcoxon signed-rank test showing the change in peg-threading scores for both groups combined, over the two-week period.
Figure 4: Wilcoxon signed-rank test showing the change in precision cutting scores for both groups combined, over the two-week period.
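For readers wishing to reproduce this kind of analysis, the following is a minimal sketch of a paired Wilcoxon signed-rank test on baseline versus final task times using Python's scipy.stats; it is not the study's actual code, and the score arrays are illustrative placeholders rather than study data.

```python
# Sketch of the whole-cohort paired analysis, assuming one baseline and one
# final task time (in seconds) per participant. Values are placeholders only.
from scipy.stats import wilcoxon

peg_initial = [112, 98, 135, 120, 101, 145, 99, 130, 118, 125]  # baseline peg-threading times
peg_final   = [ 90, 85, 110, 100,  95, 120, 80, 115, 100, 105]  # times after two weeks at home

# Paired, non-parametric comparison of baseline vs final performance
stat, p_value = wilcoxon(peg_initial, peg_final)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p_value:.3f}")
```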
Did the leader board make participants’ skills improve more?
Improvement in each task was calculated as 'initial score minus final score'. There was no statistically significant difference in improvement between the groups in the peg-threading task (p = 0.58). Interestingly, the control group (without the leader board) improved significantly more than the study group in the precision cutting task (p = 0.02).
Figure 5: Mann-Whitney U test comparing changes in peg-threading scores between the two groups over the two-week period.
Figure 6: Mann-Whitney U test comparing changes in precision cutting scores between the two groups over the two-week period.
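Similarly, a minimal sketch of the between-group comparison, with improvement defined as initial score minus final score and compared with a Mann-Whitney U test, might look as follows; the group labels and values below are hypothetical and for illustration only.

```python
# Sketch of the between-group comparison, assuming improvement is defined as
# initial score minus final score for each participant. Values are illustrative only.
from scipy.stats import mannwhitneyu

control_initial     = [112, 98, 135, 120, 101, 145, 99, 130, 118, 125]
control_final       = [ 88, 80, 105,  95,  90, 118, 78, 110,  98, 100]
leaderboard_initial = [108, 95, 140, 115, 105, 150, 97, 128, 122, 119]
leaderboard_final   = [ 92, 84, 118, 100,  96, 128, 85, 112, 108, 104]

# Improvement = initial score minus final score (larger = greater improvement)
control_improvement     = [i - f for i, f in zip(control_initial, control_final)]
leaderboard_improvement = [i - f for i, f in zip(leaderboard_initial, leaderboard_final)]

# Unpaired, non-parametric comparison of improvement between the two groups
stat, p_value = mannwhitneyu(control_improvement, leaderboard_improvement,
                             alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
```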
Did the leader board encourage participants to practise more?
Frequency of practice at both tasks was measured across the two groups, and recorded by the software within the laparoscopic simulators.
Figure 7: A bar chart to show frequency of practice in both tasks, across the two groups, with mean frequency of practice indicated.
Control group:
- Mean frequency of practice: 18.9 times (s.d. = 21.92).
- 40% practised more than 20 times.
- Three participants did not practise at all.
Leader board group:
- Mean frequency of practice: 18.8 times (s.d. = 23.52).
- 20% practised more than 20 times.
- One participant did not practise at all. One participant did not use the software to record their scores and so was excluded from this analysis.
A Mann-Whitney U test demonstrated no significant difference in frequency of practice between the two groups (p = 0.720).
Figure 8: Mann-Whitney U test comparing frequency of practice between the two groups over the two-week period.
Subjective experience of the leader board:
Figure 9: Study group participants' responses to a questionnaire about their experiences with the leader board (n=10).