Name: Ehsan Azimi
School: Johns Hopkins University
Project: Interactive Ecosystem for Surgical Training in a Realistic Mixed-Reality Environment
Research Advisor: Dr. Peter Kazanzides
Many bedside procedures rely on the surgeon's visuospatial skills to estimate the location of an anatomical target from external landmarks. This estimation carries uncertainty, which can lead to undesired outcomes. One example of such a procedure is ventriculostomy, which is performed frequently in neurosurgery: it involves inserting a catheter into the patient's skull to relieve pressure inside the brain by diverting cerebrospinal fluid.
To address this problem, this project will assess the use of an interactive augmented reality application for surgical training and practice in an immersive environment. The application will provide real-time feedback to the user while performing the procedure. After implementing the application, we will conduct a controlled multi-user study with medical residents to evaluate the benefits of the developed system. The work will have broad impact, as it would be among the first studies to demonstrate the benefit of an augmented reality head-mounted display (HMD) for surgical practice and training. Given the potential improvements the proposed platform could bring to the clinical outcomes of these procedures, this effort could become a precursor to the commercial deployment of such systems.
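As a concrete illustration of the kind of real-time feedback such a trainer could compute, the Python sketch below measures the catheter tip's distance to the anatomical target and its angular deviation from the planned entry-to-target trajectory. This is a hypothetical minimal example under assumed inputs (tracker-frame points in millimeters), not the project's actual implementation.

    import numpy as np

    # Hypothetical feedback computation (assumed inputs, not the project's
    # implementation): report the catheter tip's distance to the target and
    # its angular deviation from the planned entry-to-target trajectory.
    def insertion_feedback(tip, tip_dir, entry, target):
        """All arguments are 3D points/vectors in the tracker frame, in mm."""
        planned = (target - entry) / np.linalg.norm(target - entry)
        actual = tip_dir / np.linalg.norm(tip_dir)
        dist_mm = np.linalg.norm(target - tip)
        angle_deg = np.degrees(np.arccos(np.clip(planned @ actual, -1.0, 1.0)))
        return dist_mm, angle_deg

    # example values (made up for demonstration)
    dist, ang = insertion_feedback(tip=np.array([10.0, 5.0, 40.0]),
                                   tip_dir=np.array([0.0, 0.1, 1.0]),
                                   entry=np.array([10.0, 5.0, 0.0]),
                                   target=np.array([12.0, 6.0, 70.0]))
    print(f"distance to target: {dist:.1f} mm, angular error: {ang:.1f} deg")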
Name: Sarah O'Meara
School: University of California, Davis
Project: Virtual Human-Robotics Integration Testbed for Evaluating User Control Methods and Training for a Supernumerary Robotic Limb
Research Advisor: Dr. Stephen Robinson
This project develops a virtual Human-Robotics Integration (HRI) testbed, consisting of a simulated robotic arm, a work environment, and an interface for control methods, to evaluate neural control methods that allow a user to operate an assistive robotic arm while both natural arms are occupied. Controlling the multitude of independent objects required for a manual task or object manipulation is an operational challenge in complex work environments, such as those faced by astronauts, surgeons, and underwater operators. One possible solution is to augment a person's capabilities with supernumerary robotic limbs. However, the ability to control a supernumerary robotic limb while simultaneously using both natural arms has not been well studied. A virtual HRI testbed will therefore be invaluable for assessing control methods, evaluating human-robot performance, and serving as a training tool.
Human-robot performance will be assessed with metrics such as task completion time, training time, learning rate, trust, workload, and situational awareness, while also evaluating the efficacy of control methods and training strategies. This research will yield valuable insights into the control and use of simulated supernumerary robotic limbs and will guide future work on enhancing human capabilities.
Name: Trenton Wirth
School: Brown University
Project: Modeling Self-Organization in Human Crowds
Research Advisor: Dr. William Warren
The purpose of this dissertation project is to explore the interaction rules that best describe how the perceptually based behavior of individuals within a crowd gives rise to self-organized patterns of crowd behavior. Self-organization can be defined as complex pattern formation generated by local interactions, without direction from a central controller.
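To make the idea of self-organization concrete, the minimal Python sketch below uses a simple Vicsek-style alignment rule, in which each simulated pedestrian steers toward the average heading of nearby neighbors. This is an illustrative stand-in, not the VENLab crowd model, and all parameters are assumptions chosen for demonstration.

    import numpy as np

    # Minimal Vicsek-style sketch (illustration only, not the VENLab model):
    # each agent steers toward the circular mean heading of neighbors within
    # a fixed radius. Global alignment emerges from purely local rules.
    rng = np.random.default_rng(0)
    N, radius, speed, noise, steps = 100, 2.0, 0.05, 0.1, 500

    pos = rng.uniform(0, 10, size=(N, 2))       # positions in a 10x10 arena
    theta = rng.uniform(-np.pi, np.pi, size=N)  # agent headings

    for _ in range(steps):
        # pairwise distances define each agent's local neighborhood
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        near = d < radius
        # circular mean of neighbors' headings (the local interaction rule)
        theta = np.arctan2((near * np.sin(theta)).sum(axis=1),
                           (near * np.cos(theta)).sum(axis=1))
        theta += noise * rng.standard_normal(N)
        step = speed * np.column_stack((np.cos(theta), np.sin(theta)))
        pos = (pos + step) % 10.0               # wrap around the arena edges

    # order parameter: ~0 for random headings, ~1 for a fully aligned crowd
    print(f"alignment order: {abs(np.exp(1j * theta).mean()):.2f}")

Running the loop drives the order parameter from near 0 (random headings) toward 1 (a common heading), showing global order emerging with no central controller.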
This research project will utilize a three-pronged approach of experimentation, modeling and simulation, and real-world crowd validation to assess the Virtual Environment Navigation Lab's (VENLab) established crowd model. The experiments will use virtual reality to ask specific questions about an individual's behavior. After each experiment, we will fit the model to the data to see how well it accounts for the observed behavior, or to find where it breaks down. If the model cannot account for the behavior, we will extend it based on the empirical observations. Simulation and comparison with real-world crowd data will be used to further validate the model's real-world applicability.
Once complete, this work will provide a better understanding of how crowd behavior self-organizes. The results will then be shared with diverse audiences, contributing to a larger conversation around collective behavior, evacuation safety, and architectural design.
Name: Christopher Yang
School: Johns Hopkins University
Project: Investigating the computations underlying complex motor skill learning
Research Advisor: Dr. Adrian Haith
Every day, humans are faced with learning new motor tasks. Some tasks can be learned by adapting an existing skill in service of a new one (e.g., learning to play the ukulele when one already knows how to play the guitar). This learning mechanism, called adaptation, has been heavily studied in the motor learning field, but recent work suggests that the scope of tasks that can be learned this way is rather limited. Instead, most motor tasks must be learned de novo, that is, by creating a new skill from scratch (e.g., learning to play the guitar for the first time). De novo learning remains poorly understood, and my project focuses on understanding how this learning mechanism alters one's motor control capabilities. To do so, I have human participants learn new motor tasks in the lab and model how their sensorimotor systems change with practice. This work will help lay the foundational knowledge necessary for understanding how people learn complex real-world tasks, like piloting a plane or performing surgery, and may also inform the design of more effective training methods for such tasks.
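For illustration, one way to quantify such changes is to fit the linear mapping a participant appears to use from target direction to hand motion, early versus late in practice; a wholesale shift in the fitted mapping, rather than a small correction to the baseline map, would be consistent with a controller built from scratch. The Python sketch below is a hypothetical analysis on simulated data, not the study's actual method; the axis-swapped "late" mapping stands in for the kind of remapping thought to require de novo learning.

    import numpy as np

    # Hypothetical analysis on simulated data (not the study's method): fit,
    # by least squares, the linear map B with hand_vel ~= targets @ B.T, and
    # compare early vs. late practice.
    rng = np.random.default_rng(1)

    def fit_map(targets, hand_vel):
        X, *_ = np.linalg.lstsq(targets, hand_vel, rcond=None)
        return X.T  # 2x2 mapping from target direction to hand velocity

    targets = rng.standard_normal((200, 2))        # 200 trials, 2D targets
    baseline = np.eye(2)                           # veridical mapping
    remapped = np.array([[0.0, 1.0], [1.0, 0.0]])  # assumed axis-swap task

    early = targets @ baseline.T + 0.1 * rng.standard_normal((200, 2))
    late = targets @ remapped.T + 0.1 * rng.standard_normal((200, 2))

    print("early map:\n", fit_map(targets, early).round(2))
    print("late map:\n", fit_map(targets, late).round(2))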
To learn more about the Link Foundation Modeling, Simulation and Training Fellows and the projects the Link Foundation has funded in this field, please visit the Link Modeling, Simulation and Training webpage at http://www.linksim.org/.