Thorpe, A., Friedman, J., Evans, S., Nesbitt, K., & Eidels, A. (2022). Mouse Movement Trajectories as an Indicator of Cognitive Workload. International Journal of Human-Computer Interaction, 38(15), 1464–1479.
Abstract: Assessing the cognitive impact of user interfaces is a shared focus of human-computer interaction researchers and cognitive scientists. Methods of cognitive assessment based on data derived from the system itself, rather than external apparatus, have the potential to be applied in a range of scenarios. The current study applied methods of analyzing kinematics to mouse movements in a computer-based task, alongside the detection response task, a standard workload measure. Sixty-five participants completed a task in which stationary stimuli were targeted using a mouse, with a within-subjects factor of task workload based on the number of targets to be hovered over with the mouse (one/two), and a between-subjects factor based on whether both targets (exhaustive) or just one target (minimum-time) needed to be hovered over to complete a trial when two targets were presented. Mouse movement onset times were slower and mouse movement trajectories exhibited more submovements when two targets were presented than when one target was presented. Responses to the detection response task were also slower in this condition, indicating higher cognitive workload. However, these differences were only found for participants in the exhaustive condition, suggesting those in the minimum-time condition were not affected by the presence of the second target. Mouse movement trajectory results agreed with other measures of workload and task performance. Our findings suggest this analysis can be applied to workload assessments in real-world scenarios.
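The submovement analysis mentioned in the abstract lends itself to a simple illustration. The sketch below is a hypothetical Python example, not the authors' pipeline: the function name `count_submovements`, the speed threshold, and the synthetic trajectory are all assumptions. It counts candidate submovements as local peaks in the speed profile of an x/y mouse trace.

```python
# Illustrative sketch (not the authors' analysis code): count candidate
# submovements as local maxima of the speed profile of a mouse trajectory.
import numpy as np
from scipy.signal import find_peaks


def count_submovements(x, y, t, min_peak_speed=0.05):
    """Estimate the number of submovements in a 2-D mouse trajectory.

    x, y are position samples and t the matching timestamps in seconds.
    Each local maximum of the speed profile above min_peak_speed is
    treated as one candidate submovement.
    """
    vx = np.gradient(x, t)
    vy = np.gradient(y, t)
    speed = np.hypot(vx, vy)
    peaks, _ = find_peaks(speed, height=min_peak_speed)
    return len(peaks)


# Synthetic trajectory built from two bell-shaped velocity pulses,
# so the expected count is 2.
t = np.linspace(0.0, 2.0, 400)
v = np.exp(-(t - 0.5) ** 2 / 0.02) + 0.5 * np.exp(-(t - 1.4) ** 2 / 0.02)
x = np.cumsum(v) * (t[1] - t[0])
y = np.zeros_like(t)
print(count_submovements(x, y, t))  # -> 2
```

Real mouse data would typically be low-pass filtered before peak detection; the threshold here is arbitrary and chosen only so the toy example behaves sensibly.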
|
Wilf, M., Korakin, A., Bahat, Y., Koren, O., Galor, N., Dagan, O., et al. (2024). Using virtual reality-based neurocognitive testing and eye tracking to study naturalistic cognitive-motor performance. Neuropsychologia, 194, 108744.
Abstract: Natural human behavior arises from continuous interactions between the cognitive and motor domains. However, assessments of cognitive abilities are typically conducted using pen and paper tests, i.e., in isolation from “real life” cognitive-motor behavior and in artificial contexts. In the current study, we aimed to assess cognitive-motor task performance in a more naturalistic setting while recording multiple motor and eye tracking signals. Specifically, we aimed to (i) delineate the contribution of cognitive and motor components to overall task performance and (ii) probe for a link between cognitive-motor performance and pupil size. To that end, we used a virtual reality (VR) adaptation of a well-established neurocognitive test for executive functions, the 'Color Trails Test' (CTT). The VR-CTT involves performing 3D reaching movements to follow a trail of numbered targets. To tease apart the cognitive and motor components of task performance, we included two additional conditions: a condition where participants only used their eyes to perform the CTT task (using an eye tracking device), incurring reduced motor demands, and a condition where participants manually tracked visually-cued targets without numbers on them, incurring reduced cognitive demands. Our results from a group of 30 older adults (>65) showed that reducing cognitive demands shortened completion times more extensively than reducing motor demands. Conditions with higher cognitive demands had longer target search time, as well as decreased movement execution velocity and head-hand coordination. We found larger pupil sizes in the more cognitively demanding conditions, and an inverse correlation between pupil size and completion times across individuals in all task conditions. Lastly, we found a possible link between VR-CTT performance measures and clinical signatures of participants (fallers versus non-fallers). In summary, performance and pupil parameters were mainly dependent on task cognitive load, while maintaining systematic interindividual differences. We suggest that this paradigm opens the possibility for more detailed profiling of individual cognitive-motor performance capabilities in older adults and other at-risk populations.
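As a rough illustration of the reported inverse correlation between pupil size and completion time, the following sketch computes a Pearson correlation across simulated participants. All values and variable names are synthetic assumptions, not data from the study.

```python
# Minimal sketch (synthetic values only): Pearson correlation between mean
# pupil size and task completion time across simulated participants.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
pupil_size = rng.normal(4.0, 0.5, size=30)                          # mm, synthetic
completion_time = 60 - 8 * pupil_size + rng.normal(0, 2, size=30)   # s, synthetic

r, p = pearsonr(pupil_size, completion_time)
print(f"r = {r:.2f}, p = {p:.3g}")  # expected: negative r (inverse correlation)
```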
|
Zacks, O., & Friedman, J. (2020). Analogies can speed up the motor learning process. Sci Rep, 10(1), 6932.
Abstract: Analogies have been shown to improve motor learning in various tasks and settings. In this study we tested whether applying analogies can shorten the motor learning process and induce insight and skill improvement in tasks that usually demand many hours of practice. Kinematic measures were used to quantify participants' skill and learning dynamics. For this purpose, we used a drawing task, in which subjects drew lines to connect dots, and a mirror game, in which subjects tracked a moving stimulus. After establishing a baseline, subjects were given an analogy, explicit instructions or no further instruction. We compared their improvement in skill (quantified by coarticulation or smoothness), accuracy and movement duration. Subjects in the analogy and explicit groups improved their coarticulation in the target task, while in the mirror game significant differences between the analogy group and controls were found only at a slow movement frequency. We conclude that a verbal analogy can be a useful tool for rapidly changing motor kinematics and movement strategy in some circumstances, although in the tasks selected it did not produce better performance in most measurements than explicit guidance. Furthermore, we observed that different movement facets may improve independently from others, and may be selectively affected by verbal instructions. These results suggest an important role for the type of instruction in motor learning.
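The smoothness measure mentioned in the abstract can be illustrated with a common kinematic metric, dimensionless squared jerk. The sketch below is an assumption about how such a measure might be computed and is not necessarily the metric used in the paper; the example traces are synthetic.

```python
# Illustrative sketch: dimensionless squared-jerk smoothness cost for a 1-D
# position trace. Lower values indicate smoother movement. This is one common
# metric, assumed here for illustration, not necessarily the authors' measure.
import numpy as np


def dimensionless_jerk(position, dt):
    """Dimensionless squared-jerk cost of a position trace sampled at dt seconds."""
    velocity = np.gradient(position, dt)
    acceleration = np.gradient(velocity, dt)
    jerk = np.gradient(acceleration, dt)
    duration = dt * (len(position) - 1)
    peak_speed = np.max(np.abs(velocity))
    return np.sum(jerk ** 2) * dt * duration ** 5 / peak_speed ** 2


# A minimum-jerk-like movement should score lower (smoother) than a jittery one.
dt = 0.01
t = np.arange(0.0, 1.0 + dt, dt)
smooth = 10 * (10 * t ** 3 - 15 * t ** 4 + 6 * t ** 5)   # minimum-jerk profile
jittery = smooth + 0.2 * np.sin(40 * np.pi * t)           # added high-frequency wobble
print(dimensionless_jerk(smooth, dt) < dimensionless_jerk(jittery, dt))  # -> True
```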
|
Zopf, R., Friedman, J., & Williams, M. A. (2015). The plausibility of visual information for hand ownership modulates multisensory synchrony perception. Experimental Brain Research, 233(8), 2311–2321.
Abstract: We are frequently changing the position of our bodies and body parts within complex environments. How does the brain keep track of one’s own body? Current models of body ownership state that visual body ownership cues such as viewed object form and orientation are combined with multisensory information to correctly identify one’s own body, estimate its current location and evoke an experience of body ownership. Within this framework, it may be possible that the brain relies on a separate perceptual analysis of body ownership cues (e.g. form, orientation, multisensory synchrony). Alternatively, these cues may interact in earlier stages of perceptual processing: visually derived body form and orientation cues may, for example, directly modulate temporal synchrony perception. The aim of the present study was to distinguish between these two alternatives. We employed a virtual hand set-up and psychophysical methods. In a two-interval forced-choice task, participants were asked to detect temporal delays between executed index finger movements and observed movements. We found that body-specifying cues interact in perceptual processing. Specifically, we show that plausible visual information (both form and orientation) for one’s own body led to significantly better detection performance for small multisensory asynchronies compared to implausible visual information. We suggest that this perceptual modulation when visual information plausible for one’s own body is present is a consequence of body-specific sensory predictions.
|
Zopf, R., Truong, S., Finkbeiner, M., Friedman, J., & Williams, M. A. (2011). Viewing and feeling touch modulates hand position for reaching. Neuropsychologia, 49(5), 1287–1293.
Abstract: Action requires knowledge of our body location in space. Here we asked if interactions with the external world prior to a reaching action influence how visual location information is used. We investigated if the temporal synchrony between viewing and feeling touch modulates the integration of visual and proprioceptive body location information for action. We manipulated the synchrony between viewing and feeling touch in the Rubber Hand Illusion paradigm prior to participants performing a ballistic reaching task to a visually specified target. When synchronous touch was given, reaching trajectories were significantly shifted compared to asynchronous touch. The direction of this shift suggests that touch influences the encoding of hand position for action. On the basis of these data and previous findings, we propose that the brain uses correlated cues from passive touch and vision to update its own position for action and experience of self-location.
|