6 Exploring virtual reality for teacher training, materials development and student engagement

Paul Driver; Nicola Walshe; Siân Shaw; and Suzanne Hughes


Though it might be said that virtual reality has now ‘arrived’, in the popular sense, its evolution can be traced back through time as a constant struggle to create more visually immersive experiences. From the panoramic paintings of the nineteenth century and early experiments in stereoscopic photography, the history of immersive media has been a steady march towards the goal of creating convincing simulacra that commandeer our perceptual systems, persuading us that what we are seeing is real. Virtual reality (VR) embodies the current stage in the evolution of this process.

VR has made a remarkable recovery from the premature technology, disappointing results and over-hyped promises of its previous incarnations. The global VR market is expected to reach 49.7 billion US dollars by 2023, a dramatic rise from 3.13 billion in 2017 (Orbis Research, 2018), and VR appears to be on the brink of widespread public acceptance. However, although VR technology has advanced considerably, the heavy, uncomfortable and often expensive headsets required for an immersive experience (Laurell et al., 2019) are likely to delay widespread adoption in the short term. Currently, VR seems best suited for specialised use cases, with education and training being two primary examples.

VR is increasingly being deployed in educational settings, especially in the Higher Education sector. Its use is now well established across the fields of science, engineering and medicine. Early barriers to adoption, such as the prohibitively high cost of the technology, are rapidly diminishing, and VR is now becoming a viable tool to support teaching and learning. However, as an emerging technology that has only recently begun to gain popular traction, the study of the potential benefits of VR is still in its infancy.

The aim of this chapter is to contribute to the understanding of the potential of VR in teacher training and nurse education by providing examples, outlining theoretical considerations, and forging a methodological and technical path to guide others undertaking similar work. In addition, the authors apply VR to tackle immediate barriers to learning and teaching. In nursing, these barriers are a shortage of placements and the limited access trainee nurses have to skills labs for hands-on training. In teacher education, the primary challenge is the need to develop trainee teachers’ ability to reflect on their practice and raise their awareness of the in-situ pedagogical decision-making demonstrated by skilled and experienced teachers. As a highly immersive technology, VR can make a positive contribution towards overcoming these issues. However, more empirical data is required to avoid the pitfalls of technological determinism and inform more nuanced, context-specific applications.

What is ‘Virtual Reality’?

Virtual reality is not a single, easy-to-describe concept, and there is no universal definition. The form of VR discussed in this chapter is built from 360-degree images and video. 360-degree video uses a camera with multiple lenses to capture a full view of a scene. Images from each lens are then ‘stitched’ together and presented as a coherent 360-degree environment. These are two-dimensional media projected onto a digital sphere. When the viewer is placed at the centre of this sphere, wearing a VR head-mounted display (HMD) gives the impression of being inside the simulated environment. This form of VR affords what is known in the industry as ‘three degrees of freedom’. This refers to the number of independent ways in which the viewer can ‘move’ through three-dimensional space: in this case, rotation only, meaning that viewers can tilt and turn their heads left, right, up and down, and this orientation will be tracked and matched by the environment being viewed. Looking left inside a virtual room, for example, will cause the view inside the headset to display the left side of the room. Looking up reveals the ceiling, and so on. The position (but not the orientation) of the virtual viewpoint is fixed.
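The three-degrees-of-freedom mapping can be made concrete with a short sketch. The function below is a simplified illustration, not drawn from any particular VR toolkit: it converts a yaw/pitch head orientation into the pixel of an equirectangular 360-degree frame that the headset would centre in view.

```python
def gaze_to_pixel(yaw_deg, pitch_deg, width, height):
    """Map a gaze direction to the equirectangular pixel it centres on.

    yaw_deg:   rotation left/right, -180..180 (0 = straight ahead)
    pitch_deg: tilt up/down, -90..90 (0 = horizon)
    width, height: dimensions of the equirectangular frame in pixels
    """
    # Horizontal: yaw wraps around the full 360-degree panorama.
    u = ((yaw_deg / 360.0) + 0.5) % 1.0 * width
    # Vertical: pitch spans pole to pole (top of frame = straight up).
    v = (0.5 - pitch_deg / 180.0) * height
    return int(u), int(v)

# Looking straight ahead centres on the middle of a 4K 360 frame.
print(gaze_to_pixel(0, 0, 3840, 1920))   # → (1920, 960)
# Looking straight up centres on the top edge.
print(gaze_to_pixel(0, 90, 3840, 1920))  # → (1920, 0)
```

Turning the head simply re-centres this sampling window; because the viewpoint’s position is fixed, no translation is ever applied, which is precisely the limitation that distinguishes three degrees of freedom from six.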

The level of realism and authenticity that can be achieved through using real images and video makes VR a practical and powerful tool for creating immersive, interactive learning content. VR is now relatively cheap and fast to produce using this method, as it draws upon real-world places and people to construct digital environments. It is also comparatively easy to share; in the context of education this makes it practical to take content beyond research and integrate immersive VR experiences across a faculty. However, the major drawback to this approach is the low level of user agency. There are, however, several ways in which this disadvantage can be ameliorated. We will return to this point later.

At the technological high end, we have what is often, somewhat controversially, described as ‘True VR’ which affords six degrees of freedom. This means that the headset not only tracks the orientation of the user’s gaze but also their location as they physically move through the virtual environment. This form of VR enables (relatively) free and natural movement in the virtual environment that closely replicates how we perceive the real world. Objects can be viewed from different angles and users can walk over to them, look behind them, and even pick them up with the use of hand tracking technology or handheld controllers.

In the HE context, however, high cost and technical complexity have largely hindered the impact of this high-end form of VR on everyday teaching and learning. For these reasons, Anglia Ruskin University (ARU) has opted to use interactive 360-degree images and video rather than computer-generated VR. This aligns well with our institutional active learning strategy, and all efforts in our Faculty have been directed either towards improving teaching and learning, or towards researching ways to do so in the future. While we are interested and active in researching the higher-end configurations of VR, our priority is to democratise the technology through widespread integration into courses that will benefit our students.

VR and Learning

The examples that follow, describing the use of VR in teacher training and nursing, illustrate how it can be used for educational purposes. Consequently, these examples of applied VR are explored primarily through a pedagogical lens.

The exploratory objectives of these ongoing projects centre around three primary goals:

  1. Identify the core pedagogical affordances of VR
  2. Locate areas within the curricula that can potentially benefit from VR
  3. Use well-grounded learning design principles to build content or procedures that exploit the potential of VR

In contrast to more passive forms of media, such as text, images and video, VR can be used to create immersive, interactive simulations that provide users with a degree of choice and agency. One of the primary affordances of VR is its power to situate the student at the ontological centre of the learning experience (Gibson, 1977). The digital world, quite literally, revolves around the viewer, fully appropriating their visual, auditory and spatial perception, which generates a spatially immersive experience. The degree of sensory appropriation, in combination with the placement of the viewer as the locus of the experience, can facilitate a compelling feeling of presence (Schuemie et al., 2001).

Most people who have experienced VR report feeling a strong sense of being there, in the digital environment. While this feeling of ‘telepresence’ (Mantovani et al., 1999) is highly subjective, approaches to analysing the phenomenon typically distinguish two components, of which immersion is an emergent property:

  1. Spatial presence, or the sense of being in a place
  2. Involvement, in the sense of focusing attention on the virtual environment (Schubert, 2009)

To illustrate the interrelation between these factors, imagine a student in a crowded lecture theatre. The student is physically, spatially present in the environment, but with attention completely absorbed in a social media exchange on their phone. In this context, although physically present, it would be hard to claim the student is immersed in the learning environment. It is clear, therefore, that spatial presence must be combined with attentional focus in order to fully achieve the psychological state of immersion.

For those involved in the deliberate construction of digital environments, it is important to understand which aspects of physical spaces can elicit a strong sense of presence, and how attention and engagement can be intentionally designed into these environments. If immersion plays an important role in learning, the interplay between spatial presence and attentional focus would appear to be a dynamic that can, through informed design, be leveraged for training and educational purposes. Skills involving spatial understanding, observation and recall of visual information are obvious areas in which VR can make a positive impact. There is also a growing body of research supporting the use of VR to train affective skills through cognitive behavioural therapy (Botella et al., 2015; Zinzow et al., 2018), with popular examples including stress management, and treatment of post-traumatic stress disorder and depression. However, the role of immersion in learning, especially with VR, requires further study, and the complex interplay between immersion, spatial presence, engagement, motivation and learning is still being mapped.

The feeling of spatial presence, in tandem with the egocentric frame of reference and photorealistic imagery derived from authentic settings, creates a compelling opportunity to contextualise and situate learning. The principles of situated learning theory (Brown et al., 1989; Lave and Wenger, 2002) frame learning as inseparable from doing. Knowledge gains meaning when it is grounded in context, developing as a dynamic relationship between individuals and their situation. These ideas are now fairly uncontroversial in the field of education and have profound implications for the planning, design, creation and integration of learning objects based on interactive virtual settings.

To construct more authentic, situated and active learning experiences (as opposed to experiences designed to promote the retention of information), teachers need to shift from merely distributing information to using context as a framework for actively constructing and grounding knowledge. Tasks and assessments need to be aligned with their real-world equivalents. However, intentionally designing context, especially within the physical limitations of traditional educational settings, can be extremely challenging. The contextual affordances of brick and mortar learning spaces are, usually, literally quite static. Conversely, VR offers tools that enable the sculpting of a simulated environment, but these tools need to be used in an informed way to design a contextualised cognitive and emotional experience that is as multifaceted and authentic as possible.

For both instructional and investigative purposes, the hypothetical realities created through the use of VR can provide a higher degree of ecological validity to students, educators and researchers. Traditionally, in both contexts, while the content or stimuli can be tightly controlled, the surrounding environment imposes fixed, pre-existing constraints. VR can help to bridge the gap between the real and the constructed, affording the opportunity to study more complex in-situ behaviour that would otherwise be logistically impractical. The advantage of improving ecological validity through the use of VR is that the insights gained may prove to be more transferable to the real world.

These are all issues to which we seek to find answers through our research and practice. The following examples foreground different aspects of this journey.

Example 1: Teacher training

In the first example of applied VR, the design complexity was minimised while retaining a high degree of authenticity. When constructing a VR scenario, there are two main design stages: the first involves the planning and filming of the scene(s) using 360-degree cameras; the second is the addition of a digital overlay. This overlay typically comprises navigation options to enable simulated movement between or within scenes and a series of ‘tags’ or ‘hotspots’ that provide additional functionality and information. Hotspots range from descriptive text and labels to spatial audio narration, images, slides, object markers, multiple-choice questions and traditional ‘flat’ 2D video. The use of digital overlays can restore some of the meaningful user agency that is typically lacking with this type of VR. They are also a primary tool for focusing attention and increasing engagement.
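As a rough illustration of how such an overlay can be represented, the hypothetical structure below anchors each hotspot to a position on the sphere (yaw/pitch) and to a time window during which it is visible. The field names, values and file names are illustrative assumptions, not the format of the authoring software actually used.

```python
# A hypothetical overlay definition. Each hotspot has a type, an anchor
# position on the sphere (yaw/pitch in degrees), a visibility window in
# seconds of playback time, and a payload (text, audio file or scene id).
overlay = [
    {"type": "label",      "yaw": -40, "pitch": 5,   "start": 0.0,  "end": 999.0,
     "payload": "Whiteboard"},
    {"type": "narration",  "yaw": 10,  "pitch": 0,   "start": 62.0, "end": 95.0,
     "payload": "teacher_commentary_01.mp3"},   # illustrative file name
    {"type": "navigation", "yaw": 90,  "pitch": -10, "start": 0.0,  "end": 999.0,
     "payload": "overhead_view"},
]

def active_hotspots(t):
    """Return the hotspots visible at playback time t (seconds)."""
    return [h for h in overlay if h["start"] <= t <= h["end"]]

# At 70 seconds, the timed narration is active alongside the two
# permanent hotspots; at 10 seconds it has not yet appeared.
for hotspot in active_hotspots(70.0):
    print(hotspot["type"], hotspot["payload"])
```

A player loop would query `active_hotspots` each frame and render the results at their yaw/pitch anchors, which is how timed commentary can be synchronised with events unfolding in the video.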

The project outlined below builds on an earlier pilot study (Walshe and Driver, 2018), in which inexperienced trainee primary teachers were filmed in the classroom using 360-degree video technology and then asked to re-watch the video using virtual reality headsets. The results of this study led to several significant observations that reinforced the idea that VR can produce highly embodied, spatially situated experiences that promote learning. For example, most of the trainee teachers felt they were re-visiting rather than just re-watching their lessons, revealing a strong feeling of presence and a shift in temporal as well as spatial perspective. Trainee teachers were also able to produce markedly more nuanced reflections on their own and their students’ behaviour. However, the need to better scaffold the development of trainee teachers’ reflective practice, and an opportunity to raise their awareness of in-situ pedagogical decision-making, were also noted (Walshe et al., 2019). To achieve this goal, a follow-up interpretive case study was initiated involving 23 Year 3 students on the BA Primary Education Studies course. We adopted Stake’s (1995) instrumental case study approach, using the examination of a particular context to facilitate wider understanding. After receiving ethical approval, written parental consent on behalf of the pupils, and verbal agreement from students, we began filming highly experienced teachers in practice.

Beginning with English and Maths lessons, we followed a similar process to that taken in the earlier pilot study. The experienced teachers were asked to ‘re-visit’ the VR captures of their lessons and verbally reflect on their teaching using ‘think-aloud protocol’ to articulate their thoughts and observations. These reflections were recorded in both audio and standard video.

We recorded a real lesson, delivered in real time, with real students, in a single space (the classroom), which streamlined the post-production workflow. The production was also considerably simplified through the use of fixed (stationary) cameras to capture the lesson. One camera was placed at desk-level in the middle of the classroom space and the other attached to the overhead projector to provide a panoptic overhead view. Each camera was controlled remotely from outside the classroom, using mobile devices to monitor live feeds, minimising the disturbance of the classroom dynamic that our presence might otherwise have caused.

Stage 2 involved the creation of the digital overlay using the recorded audio and video clips of the experienced teachers commenting on their lessons. As these comments were extensive, they were edited down to a curated range of observations to highlight different skills. We paid particular attention to commentary that unveiled non-obvious, tacit pedagogical decisions.

The digital overlay used timed narration to synchronise comments with events taking place in the lesson. A navigation hotspot was also created to allow viewers to move from the desk-level viewpoint to the overhead panoptic view. The video was edited with the intention of increasing the salience of important moments as they unfolded in the classroom. In these instances, a circle would appear around the area of interest. The area outside this circle would darken and blur and the area inside would brighten and magnify. One such instance showed a pupil who appeared to be excluded from working in a group, before the synchronised teacher narration provided an explanation of what was actually taking place and her reasoning to allow this to happen. This may seem like a trivial piece of video editing. However, it is important to remember that a VR user wearing a headset is free to look in any direction. In such a dynamically evolving scenario as a primary school classroom, there are multiple simultaneous events competing for the viewers’ attention. With traditional ‘flat’ video, the agency to choose where to look is limited to the single rectangular frame the director has preselected. While still experimental, the use of this toolbox of techniques (e.g. zoom, blur, highlight and magnify) seems to achieve the desired effect of capturing and directing viewers’ attention so that they do not miss fleeting but significant events.
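The highlight-circle effect described above can be sketched as a simple function of angular distance from a region of interest. The code below is an illustrative reconstruction under assumed parameters (such as the 15-degree radius), not the editing software’s actual implementation.

```python
import math

def angular_distance(yaw1, pitch1, yaw2, pitch2):
    """Great-circle angle (degrees) between two view directions on the sphere."""
    y1, p1, y2, p2 = map(math.radians, (yaw1, pitch1, yaw2, pitch2))
    # Spherical law of cosines; clamp to guard against floating-point drift.
    cos_a = (math.sin(p1) * math.sin(p2)
             + math.cos(p1) * math.cos(p2) * math.cos(y1 - y2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

def shading(yaw, pitch, roi_yaw, roi_pitch, roi_radius=15.0):
    """Brighten and magnify inside the circle of interest; darken and blur outside."""
    if angular_distance(yaw, pitch, roi_yaw, roi_pitch) <= roi_radius:
        return "brighten"
    return "darken_blur"

# A direction inside the region of interest is brightened...
print(shading(5, 2, 0, 0))    # → brighten
# ...while a direction 90 degrees away is darkened and blurred.
print(shading(90, 0, 0, 0))   # → darken_blur
```

Applying such a per-direction shading decision frame by frame is one way the circle can track an unfolding event while the rest of the sphere recedes, steering the viewer’s gaze without removing their freedom to look elsewhere.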

This is an ongoing project in its early stages. As such, the insights and conclusions drawn from this research will be shared in future publications. Nevertheless, there have already been several unexpected points to emerge that are of interest from both theoretical and practical perspectives.

One example was observed when a trainee teacher participant was wearing the VR headset and progressing through the classroom scenario. When she navigated to the second camera (suspended from the ceiling projector), she became visibly disconnected from the feeling of immersion. She verbally expressed surprise and discomfort at the shift in perspective. While the first camera had been providing a viewpoint of the classroom that roughly corresponded to the participant’s seated height and position, the top-down panoptic view of the suspended camera had created a jarring proprioceptive mismatch by forcing a perspective that was not coherent with her internal body schema (Lakoff and Johnson, 1999). This had the effect of breaking immersion, abruptly ‘snapping’ the participant out of the constructed reality and reminding her that she was, in fact, sitting in an empty room and engaging with a simulation.

Example 2: Nurse education

In this section, we describe how we approached the design, production and deployment of an immersive community care scenario for second-year students on the undergraduate nursing course. This core material is set in the home of an elderly service-user and is being used to assess the potential benefits of applying VR in this context.

While this is currently an academic study, the desired outcome is that this research will inform the future widespread integration of VR technology across the faculty, including nursing, midwifery and social care courses. Students highly value the time spent within our skills labs and have shown great interest in finding further opportunities to develop their practical skills before entering placements. As these skills labs are already operating at full capacity, and clinical placements are in limited supply, VR could help to bridge this gap through the creation of virtual labs and placements based on authentic locations and realistic scenarios. One of the major benefits of this approach is the scalability and repeatability afforded by VR, allowing students to revisit scenarios multiple times and proceed at their own pace.

In this mixed-methods study, we investigated and evaluated the affordances of VR to support the development of empathy, compassion and decision-making skills. This differed significantly from the previous study in that we were comparing three distinct groups of learners to assess the relative benefits and drawbacks of different modes of delivery.

This scenario generated considerable technical challenges that were not encountered during the study of pedagogical decision-making. Firstly, there was an inversion of movement in the 360-degree filming. While in the teacher training context the two cameras were fixed in place, producing the effect that all movement was taking place around the viewer, in the nursing scenario the central protagonist was an elderly woman, navigating her home in an electric wheelchair. To provide the user with a perspective that revealed the world from her eye level, the stereoscopic 360-degree camera was attached to her wheelchair, raised just above head level.

In tests of early footage, we found that people quickly began to feel signs of motion sickness. This is a common symptom of VR exposure (Allison et al., 2001). This was due, at least in part, to the fact that we had introduced two simultaneous and conflicting levels of motion. Viewers wearing the HMD were already introducing movement to the video, by turning their heads to look around. By adding the movement of the wheelchair to this, we were creating the perfect recipe for nausea by mismatching input from the user’s visual system and their vestibular system (responsible for spatial orientation and sense of balance). By applying a series of counter-measures, such as slowing down the wheelchair, making changes to the video, and modifying the instructions for participants, we succeeded in cancelling this effect.

One group interacted with the immersive scenario using an untethered head-mounted VR display with spatial audio. A second group received the same content embedded within the Virtual Learning Environment (VLE), using a desktop computer, mouse and monitor. The embedded content was identical to that experienced by the first group, retaining all the interactivity provided by the digital overlay; however, these students controlled the viewpoint by clicking and dragging the mouse to explore and navigate the environment. A third group received a ‘typical’ non-immersive version of the same scenario, constructed using text, images and other commonly used tools available within the VLE.

While every effort was made to ensure that all three groups encountered the same information, the way in which they consumed and interacted with the media was qualitatively very different. As anticipated, levels of immersion and feelings of presence declined from those using the VR headset to those with more ‘exocentric’ or ‘outside-in’ viewpoints, who did not feel present in the constructed reality. We were also interested in analysing the effects of presence, and the use of the virtual environment as a mnemonic device to support the transfer of information encountered within the scenario into long-term memory. As with the previous example, the results and insights gained from this ongoing research will be disseminated in future publications.


The concept of embodiment has a long history in philosophical thought, especially in the work of thinkers in the phenomenological tradition, such as Heidegger (1962) and Merleau-Ponty (2002), who emphasise the body as the locus of identity and highlight the centrality of sensory experience and perception in how we engage with the world. More recently, in cognitive psychology, the theory of embodied cognition (Lakoff and Johnson, 1999; Bergen, 2012) has begun to redefine our understanding of how we process information. This challenges the Cartesian views and computational theories of the mind that have long dominated traditional cognitive science and informed educational theory, to the point that it is difficult even to speak of a theory of mind without recourse to ontological metaphors derived from technology, such as ‘processing information’.

It is interesting that computer scientists, Human-Computer Interaction designers, and professionals from many other technology-related industries are now drawing upon the ideas of the above thinkers to inform their work. Dourish (2001), for example, emphasises the need to understand skilled, engaged practice and to incorporate social understanding into the design of better interactive systems that connect with the settings in which they are embedded. He distinguishes between ‘inhabited interaction’ in the world and ‘disconnected observation and control’ (2001: 102). This distinction can also be found in the work of Heidegger, who describes two modes in which we interact with objects in the world: Zuhandenheit (ready-to-hand) and Vorhandenheit (present-at-hand). When objects are ready-to-hand, we relate to them on a practical level, using them to achieve our goals seamlessly, as an extension of ourselves. In contrast, when we relate to objects in the present-at-hand mode, we contemplate them, aware of their separation from us. These modes can flow backwards and forwards when, for example, a mouse we are using to move the cursor on a screen suddenly stops working. Just moments earlier, as we worked, we were not even consciously aware of the mouse; it felt like an extension of our arms or eyes. However, once it ceases to work we stop to look at it, pick it up and rotate it to look for blockages or check if the batteries need changing.

These somewhat abstract ideas become useful when we are attempting to construct a coherent digital space that evokes feelings of presence and immersion. Constructing VR environments, whether these are computer generated or built around 360-degree video of authentic locations, requires the deliberate production and manipulation of space (Lefebvre, 1974) to create a designed experience that provides a particular representation of reality. This representation can quickly lose integrity when we cause a user to shift modes of interaction with the virtual world.

The risk of breaking immersion is further increased with the addition of a digital overlay. The floating hotspots that appear in the virtual scenarios are clearly not part of the original scenes; they are designed to stand out from the background. They look ‘digital’ – visually distinct from the authentic video layer. They prompt interaction by rotating and making a ‘pop’ sound as they appear to draw the attention of viewers. The use of the overlay was an initial cause of concern, precisely due to the danger that its obvious artificiality would cause a jarring breakdown in the ambience of the VR scenarios. However, both in testing and in practice, this proved not to be the case. Viewers appear to accept the overlay as a semantic interface to the experience, without question or a shift in their mode of engagement with the VR world. Curiously, only when a hotspot failed to work as expected during testing did we notice a clear breakdown in immersion and presence. This is a phenomenon that clearly requires further exploration.


It would appear that the sense of presence and immersion is quite robust within VR environments, to the point that it is possible to interrupt cognitive flow (Csikszentmihalyi, 2014) without jarring the viewer/user into a mode in which the experience becomes perceived as present-at-hand. This was explored in our early study on trainee teacher reflection and is also a current theme in our follow-up study on in-situ pedagogical decision-making. This point is especially interesting as other studies on VR have highlighted the experience of flow as a strong predictor of empathy and embodiment (Shin, 2018). The service-user scenario for nurse education employs intentional flow-breaking as an integral part of the learning design. Hotspots were used to interrupt the scenario, freezing the 360-degree video while participants answered multiple-choice questions. In both studies, the interruption of flow was used as a tool for prompting participants to consciously contemplate events, objects and people in the VR world, at moments identified as key opportunities for active, situated learning. These insights and the results of these ongoing studies will guide decisions regarding the identification of future opportunities to improve learning through VR. They will also inform the theoretical understanding of the affordances of VR, the learning design principles that underpin their educational application, and the technologies and production methods used in their creation.

Further exploration of the limitations of this form of VR is required, including issues surrounding accessibility, for example, and the absence of tactile feedback. We also need to more fully understand the role of sound, especially spatial audio (Cohen et al., 2015), as a tool for designing 360-degree soundscapes that match the visuals in order to increase the feeling of immersion. More broadly, it will be useful to further investigate the role of narrative and gameful design (Aguilar et al., 2018) as tools for the deliberate sequencing of events to scaffold learning and create engagement.


Aguilar, S.J., Holman, C. and Fishman, B.J. (2018) Game-Inspired Design: Empirical Evidence in Support of Gameful Learning Environments. Games and Culture, 13 (1), 44–70. Online. https://doi.org/10.1177/1555412015600305 (accessed 10 April 2019).

Allison, R.S., Harris, L.R., Jenkin, M., Jasiobedzka, U. and Zacher, J.E. (2001) ‘Tolerance of temporal delay in virtual environments’. In Proceedings of IEEE Virtual Reality 2001, 247–54. Online. https://doi.org/10.1109/VR.2001.913793 (accessed 10 April 2019).

Bergen, B.K. (2012) Louder than Words: The new science of how the mind makes meaning. New York, NY: Basic Books.

Botella, C., Serrano, B., Baños, R.M. and Garcia-Palacios, A. (2015) ‘Virtual Reality Exposure-based Therapy for the Treatment of Post-traumatic Stress Disorder: A Review of Its Efficacy, the Adequacy of the Treatment Protocol, and Its Acceptability’. Neuropsychiatric Disease and Treatment, 11 (2015): 2533–45.

Brown, J., Collins, A. and Duguid, P. (1989) ‘Situated Cognition and the Culture of Learning’. Educational Researcher, 18 (1), 32–42.

Cohen, M., Villegas, J. and Barfield, W. (2015) ‘Special issue on spatial sound in virtual, augmented, and mixed-reality environments’. Virtual Reality, 19 (3), 147–8. Online. https://doi.org/10.1007/s10055-015-0279-z (accessed 10 April 2019).

Csikszentmihalyi, M. (2014) Flow and the foundations of positive psychology: The collected works of Mihaly Csikszentmihalyi. London: Springer.

Dourish, P. (2001) Where the Action is: The foundations of embodied interaction. Cambridge, MA: MIT Press.

Gibson, J.J. (1977) ‘The Theory of Affordances’. In R. Shaw and J. Bransford (eds) Perceiving, Acting, and Knowing. New Jersey: Lawrence Erlbaum Associates, 127–43.

Heidegger, M. (1962) Being and Time. New York: Harper and Row.

Lakoff, G. and Johnson, M. (1999) Philosophy in the Flesh: The embodied mind and its challenge to western thought. New York, NY: Basic Books.

Laurell, C., Sandström, C., Berthold, A. and Larsson, D. (2019) ‘Exploring barriers to adoption of Virtual Reality through Social Media Analytics and Machine Learning – An assessment of technology, network, price and trialability’. Journal of Business Research. Online. https://doi.org/10.1016/j.jbusres.2019.01.017 (accessed 10 April 2019).

Lave, J. and Wenger, E. (2002) Situated Learning: Legitimate peripheral participation. Cambridge: Cambridge University Press.

Mantovani, G. and Riva, G. (1999) ‘“Real” presence: How different ontologies generate different criteria for presence, telepresence, and virtual presence’. Presence: Teleoperators and Virtual Environments, 8 (5), 540–50.

Merleau-Ponty, M. (2002) Phenomenology of Perception, London: Routledge Classics.

Orbis Research (2018) ‘Global Virtual Reality Market – Segmented by Product Type (Hand-held Devices, Gesture-controlled Devices, HMD), VR Technology, Applications, and Region – Growth, Trends, and Forecast (2018–2023)’. Online. https://www.orbisresearch.com/reports/index/global-virtual-reality-market-segmented-by-product-type-hand-held-devices-gesture-controlled-devices-hmd-vr-technology-applications-and-region-growth-trends-and-forecast-2018–2023 (accessed 10 January 2019).

Shin, D. (2018) ‘Empathy and embodied experience in virtual environment: To what extent can virtual reality stimulate empathy and embodied experience?’ Computers in Human Behavior, 78, 64–73. Online. https://doi.org/10.1016/j.chb.2017.09.012 (accessed 10 April 2019).

Schubert, T. (2009) ‘A New Conception of Spatial Presence: Once again, with feeling’. Communication Theory, 19 (2), 161–87.

Schuemie, M., Van der Straaten, P., Krijn, M. and Van der Mast, C. (2001) ‘Research on presence in virtual reality: A survey’. Cyberpsychology and Behavior: The Impact of the Internet, Multimedia and Virtual Reality on Behavior and Society, 4 (2), 183–201.

Vygotsky, L.S. (1978) Mind in Society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Walshe, N., Driver, P., Jakes, T. and Winstanley, J. (2019) ‘Developing trainee teacher understanding of pedagogical content knowledge using 360-degree video and an interactive digital overlay’, Impact: Journal of the Chartered College of Teaching.

Walshe N. and Driver P. (2018) ‘Developing reflective trainee teacher practice with 360-degree video’. Teaching and Teacher Education, 78, 97–105. Online. https://doi.org/10.1016/j.tate.2018.11.009 (accessed 10 April 2019).

Zinzow, H.M., Brooks, J.O., Rosopa, P.J., Jeffirs, S., Jenkins, C., Seeanner, J., McKeeman, A. and Hodges, L.F. (2018) ‘Virtual Reality and Cognitive-Behavioral Therapy for Driving Anxiety and Aggression in Veterans: A Pilot Study’. Cognitive and Behavioral Practice, 25 (2), 296–309. Online. https://doi.org/10.1016/j.cbpra.2017.09.002 (accessed 10 April 2019).



Innovations in Active Learning in Higher Education Copyright © 2020 by Paul Driver; Nicola Walshe; Siân Shaw; and Suzanne Hughes is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
