Computer-based simulation games provide an environment for training complex problem-solving skills. Yet, it is largely unknown how learners' in-game performance varies with different levels of prior knowledge. Based on theories of complex-skill acquisition (e.g., 4C/ID), we derive four performance aspects that prior knowledge may affect: (1) systematicity in approach, (2) accuracy in visual attention and motor reactions, (3) speed in performance, and (4) cognitive load. This study aims to empirically test whether prior knowledge affects these four aspects of performance in a medical simulation game for resuscitation skills training. Participants were 24 medical professionals (experts, with high prior knowledge) and 22 medical students (novices, with low prior knowledge). After pre-training, all participants played one scenario, during which game logs and eye movements were recorded; afterwards, they completed a cognitive-load questionnaire. During game play, experts demonstrated a more systematic approach, higher accuracy in visual selection and motor reaction, and a higher performance speed than novices. Their reported levels of cognitive load were lower. These results indicate that prior knowledge has a substantial impact on performance in simulation games, opening up the possibility of using our measures for performance assessment.
Computer-based simulation games (CBSGs) are effective learning environments for complex skills. As simulations, they approximately replicate the complexity of real-life situations (Koivisto, Niemi, Multisilta, & Eriksson, 2017). As computer games, they provide a package of causally connected problems that unfold through learners' interaction with the game (Kiili, 2005). In this simulated problem-solving environment, learners can train specific professional skills in areas such as aviation, business management, and medicine (Dankbaar et al., 2016; De Freitas, 2006; Hernández-Lara, Perera-Lluna, & Serradell-López, 2019). However, CBSGs face a challenge in that a learner's in-game performance is difficult to assess via traditional measurements such as achievement tests (Kang, Liu, & Qu, 2017). This challenge is mainly due to the open-ended nature of CBSGs (Squire, 2008), which allows for a large number of different behaviors. Therefore, recent research has focused on tracking users' in-game behaviors through game data, for example via serious game analytics (Kang et al., 2017; Loh, Sheng, & Ifenthaler, 2015; Wallner & Kriglstein, 2013). These studies identified several limitations: data analysis that does not involve educational theoretical principles often fails to fully account for students' performance (Kang et al., 2017); game logs that are not translated into high-level, meaningful actions can yield confounding information (Zhou, Xu, Nesbit, & Winne, 2010); some important factors, such as timing, cannot be explained by analyzing sequences of events alone (Clark, Martinez-Garza, Biswas, Luecht, & Sengupta, 2012); and empirical studies on how game data can inform performance assessment are scarce (Hou, 2015; Kang et al., 2017).