Link Foundation Fellowships Newsletter

Inside this Issue

Features

Meet this Year's Fellowship Recipients

Link Fellowship Awardees for 2021-2022

Modeling, Simulation and Training

FIRST YEAR FELLOWS

Jessie Cossitt

Name: Jessie Cossitt
School: Mississippi State University
Project: Dynamic Task Allocation and Understanding of Situation Awareness Under Different Levels of Autonomy in Closed-Hatch Military Vehicles Using Virtual Reality
Research Advisor: Dr. Cindy Bethel


This research uses a virtual environment to investigate how humans and autonomous vehicles can work together most effectively, with the goals of completing military missions successfully and using autonomous capabilities to reduce required crew sizes. Previous attempts to determine the optimal allocation of tasks to crews reduced from three members to two have used static allocation. Static allocation is easy to implement, but because mission environments change constantly, it is not ideal. A more challenging alternative is to allocate tasks to crew members through a real-time, dynamic process. Optimized task allocation may enhance task performance, decrease cognitive workload, and improve situation awareness for crew members, which in turn could reduce the labor and costs associated with mission performance. This research aims to dynamically optimize task allocation in a virtual reality environment by first establishing how cognitive load, situation awareness, and task performance are affected by different levels of vehicle autonomy and by the frequency and complexity of secondary tasks. The resulting system could be applied to real mission scenarios and used to train soldiers to operate autonomous systems in virtual reality prior to missions.
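
To make the dynamic-allocation idea concrete, here is a minimal sketch of a greedy real-time allocator that routes each incoming secondary task to the least-loaded crew member. The class, the scalar load scores, and the greedy rule are illustrative assumptions, not the project's actual method, which will be informed by the cognitive-load and situation-awareness measurements described above.

```python
# Hypothetical sketch of real-time, dynamic task allocation for a two-person
# crew; all names and the workload scale (0..1) are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class CrewMember:
    name: str
    cognitive_load: float = 0.0              # running workload estimate, 0..1
    assigned: list = field(default_factory=list)

def allocate(task: str, task_load: float, crew: list[CrewMember]) -> CrewMember:
    """Assign a newly arrived secondary task to the least-loaded crew member."""
    best = min(crew, key=lambda m: m.cognitive_load)
    best.assigned.append(task)
    best.cognitive_load = min(1.0, best.cognitive_load + task_load)
    return best

crew = [CrewMember("commander"), CrewMember("driver")]
allocate("route replanning", 0.3, crew)       # tasks arrive as the mission evolves
allocate("threat identification", 0.5, crew)
for m in crew:
    print(m.name, m.cognitive_load, m.assigned)
```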


Ganesh Pai Mangalore

Name: Ganesh Pai Mangalore
School: University of Massachusetts Amherst
Project: Evaluating Mixed Reality Training for Calibrating Operators’ Mental Models of Advanced Driver Assistance Systems
Research Advisor: Dr. Anuj K. Pradhan


Advanced driver assistance systems (ADAS) promise safety benefits, but inappropriate use by uninformed drivers can negate them. Research indicates that the benefits are maximized when users hold accurate and complete mental models of these systems, i.e., when they understand the systems' functions and limitations. Because mental models are difficult to define and measure, however, there are no efficient methods for developing accurate mental models and promoting user knowledge. This project considers a potential approach: using a Virtual Reality (VR) platform to shape mental models. The project will leverage outcomes from our past research on VR and mental models to conceptualize, develop, and evaluate a VR training program that shapes drivers' mental models of ADAS technologies. The effects of this training program will also be evaluated experimentally on an advanced driving simulator. The findings could have significant translational impact on vehicle system and user interface design, driver education and licensing, and law enforcement policies.


Mike Salvato

Name: Mike Salvato
School: Stanford University
Project: Predicting hand-object interaction for improved haptic feedback in simulated environments
Research Advisor: Dr. Allison Okamura


Compelling virtual and augmented reality experiences with haptic feedback require that virtual object interaction occur smoothly and with minimal latency. While low-latency hand tracking methods exist, hand reconstruction errors, especially when exacerbated by motion blur or occlusion, cause timing errors in human-object interaction that reduce realism and create force-feedback instability. These issues call for novel approaches to tracking for interaction that accurately determine interaction timing and hand movement. To address this challenge, we propose an interaction-expectation model, which uses the history of hand poses to predict hand-object interaction before it begins. Rather than focusing only on tracking accuracy, the model assumes that the change in hand pose over time encodes information about when the human expects object interaction to begin. The interaction-expectation model would augment hand tracking systems by providing predictions of future hand poses and of the timing of object contact. These predictions can reduce object-interaction latency and thereby improve haptic feedback timing. Furthermore, by using predicted future hand poses, tracking systems would become less sensitive to local tracking error, occlusion, and motion blur. This work could improve object interaction in virtual and augmented reality for training, education, remote social interaction, and entertainment.
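
As a rough illustration of the interaction-expectation idea, the sketch below extrapolates a short hand-pose history to estimate the time until object contact. The constant-velocity model, the 90 Hz sampling rate, and all names are assumptions standing in for whatever predictive model the project ultimately develops.

```python
# Illustrative sketch: extrapolate recent hand positions to estimate when
# contact with an object will occur. Constant-velocity motion is an assumption.
import numpy as np

def predict_contact_time(pose_history: np.ndarray, dt: float,
                         object_pos: np.ndarray, contact_radius: float = 0.01):
    """pose_history: (T, 3) recent hand positions sampled every dt seconds.
    Returns estimated seconds until the hand reaches the object, or None."""
    velocity = (pose_history[-1] - pose_history[-2]) / dt   # constant-velocity model
    offset = object_pos - pose_history[-1]
    speed_toward = np.dot(velocity, offset) / (np.linalg.norm(offset) + 1e-9)
    if speed_toward <= 0:
        return None                       # hand is not approaching the object
    distance = np.linalg.norm(offset) - contact_radius
    return max(distance, 0.0) / speed_toward

# Hand moving toward the origin at 1.8 m/s, sampled at 90 Hz.
history = np.array([[0.30, 0.0, 0.0], [0.28, 0.0, 0.0], [0.26, 0.0, 0.0]])
print(predict_contact_time(history, dt=1/90, object_pos=np.zeros(3)))
```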

SECOND YEAR FELLOWS

Ali Ebrahimi

Name: Ali Ebrahimi
School: Johns Hopkins University
Project: Design and simulation of intelligent control algorithms for bimanual robot-assisted retinal surgery training system
Research Advisor: Dr. Iulian Iordachita

Retinal microsurgery remains one of the most demanding surgical procedures, involving ultra-fine vein manipulations. In such intricate procedures, which are performed bimanually using two surgical instruments, surgeon hand tremor may cause severe injuries to the eye. Advances in robotic assistance for eye surgery (the Steady-Hand Eye Robots developed at Johns Hopkins University) have proved beneficial in reducing hand tremor by providing steady and robust surgical tool manipulation. However, sufficient sensing capabilities and smart control methods must be integrated with the robots to ensure safe performance in the confined area of the eye. To enable safe bimanual robot-assisted eye surgery, we will first design and simulate hybrid force/position control algorithms that account for the safety considerations of manipulating two robots inside the eye. We will then develop the hardware and software infrastructure for a bimanual two-robot system and build smart multi-function surgical instruments to boost the robots' sensing capabilities. Finally, we will implement the designed control strategies on the developed system and train clinicians to develop intuitive skills for bimanual robot-assisted eye surgery. We anticipate that using multi-function force-sensing tools in conjunction with two cooperative robots could enable safe, precise, and semi-autonomous retinal surgeries.
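
For readers unfamiliar with the technique, the sketch below shows a textbook hybrid force/position control law, in which a selection matrix decides, axis by axis, whether position error or force error drives the command. The gains, the single-arm scope, and the velocity-command formulation are simplifying assumptions; the project's actual controllers must handle two cooperating robots and the safety constraints of the eye.

```python
# Minimal sketch of classical hybrid force/position control; gains and the
# selection matrix are illustrative assumptions, not the project's design.
import numpy as np

def hybrid_control(x, x_des, f, f_des, S, kp=50.0, kf=0.02):
    """Velocity command mixing position control (axes where S=1)
    and force control (axes where S=0)."""
    S = np.diag(S)
    I = np.eye(3)
    v_pos = kp * (x_des - x)          # proportional position-error term
    v_force = kf * (f_des - f)        # proportional force-error term
    return S @ v_pos + (I - S) @ v_force

# Position-control x and y; force-control z (e.g., limit tool-to-tissue force).
v = hybrid_control(x=np.array([0.0, 0.0, 0.01]),
                   x_des=np.array([0.001, 0.0, 0.01]),
                   f=np.array([0.0, 0.0, 0.03]),
                   f_des=np.array([0.0, 0.0, 0.01]),
                   S=[1, 1, 0])
print(v)   # commanded tool-tip velocity (m/s)
```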


Julia Juliano

Name: Julia Juliano
School: University of Southern California
Project: Neural mechanisms of head-mounted display virtual reality motor learning and transfer to the real world
Research Advisor: Dr. Sook Lei Liew


The use of head-mounted display virtual reality (HMD-VR) in motor rehabilitation has grown rapidly in recent years. Motor rehabilitation interventions using HMD-VR are effective only if the motor skills learned in HMD-VR transfer to the real world, yet research findings conflict on whether such transfer from the immersive virtual environment occurs. What is lacking is a clear explanation, or a potential mechanism, for why HMD-VR motor transfer occurs in some cases but not others. Without this information, HMD-VR cannot be harnessed effectively to promote motor rehabilitation for clinical populations, such as individuals recovering from stroke.

The purpose of this dissertation project is to identify the neural mechanisms involved in the transfer of HMD-VR motor learning to the real world and to examine whether manipulating these neural correlates can facilitate HMD-VR motor transfer. The results are expected to have an important positive impact: they could provide specific neural targets for improving the transfer of motor learning from HMD-VR to the real world and supply basic science to guide the design of future emerging-technology applications.


If you would like to learn more about our Link Foundation Modeling, Simulation and Training Fellows and the projects the Foundation has funded in this field, please visit the Link Modeling, Simulation and Training webpage at http://www.linksim.org/.