October 2004


Inside this Issue


Welcome to our 2nd Edition, Lee Lynd
"Hooked on Earth Systems," Christopher Yang, Former Link Fellowship Recipient
"From the Chair of the Board," David M. Gouldin
Donor Listing
News Updates

Energy (2004-2006)
Simulation and Training (2004-2005)
Ocean Engineering (2004-2005)
Jernej Barbic

University/Department: Carnegie Mellon University, Computer Graphics Lab
Advisor: Professor Doug James
Contact: barbic@cs.cmu.edu
Title: A New Approach to Nonlinear Simulations of Deformable Objects

The goal of this project is to develop a new technique for real-time, interactive 3D simulation of deformable bodies, for use in medical surgery simulators, the computer-generated film industry, and computer games.

Physically realistic simulation of deformable objects is difficult because the underlying equations of physics are computationally demanding. For larger models, it is simply infeasible to solve the equations directly in real time, as an interactive application requires. Many simulators in practice use linear simplifications of the equations, at the expense of introducing large visible artifacts. Our technique will support nonlinear laws of physics and should be faster than existing methods in many interesting cases. It should also scale well with the geometric complexity of the deformable objects, supporting, for example, interactive simulation of 3D models with large tetrahedral meshes.
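The newsletter does not describe the project's actual algorithm, but the cost of nonlinearity can be illustrated with a deliberately tiny sketch: a single 1D mass on a spring, stepped with symplectic Euler, comparing a linear force law (the common real-time simplification) against a hypothetical cubic stiffening term. The functions and constants below are illustrative assumptions, not the project's method.

```python
def linear_force(x, k=10.0):
    # Linearized elasticity: restoring force proportional to displacement.
    return -k * x

def nonlinear_force(x, k=10.0):
    # Hypothetical cubic stiffening term: stiffness grows with deformation,
    # as in nonlinear material models.
    return -k * x - k * x**3

def simulate(force, x0=1.0, v0=0.0, dt=1e-3, steps=5000, mass=1.0):
    # Symplectic Euler time stepping: update velocity, then position.
    x, v = x0, v0
    for _ in range(steps):
        v += dt * force(x) / mass
        x += dt * v
    return x

x_lin = simulate(linear_force)
x_non = simulate(nonlinear_force)
# The stiffened model oscillates faster, so the two trajectories diverge --
# the kind of discrepancy that shows up as visible artifacts when a
# linearized simulator handles large deformations.
```

Even in 1D the trajectories separate noticeably after a few seconds of simulated time; in a full 3D tetrahedral mesh, the same nonlinear evaluation must be done per element, which is what makes direct real-time solution infeasible for large models.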

We are also working on real-time collision detection for deformable bodies. Several methods exist in the literature for collision detection between rigid bodies, but very few address the harder problem of real-time collision detection for deformable bodies.
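One reason deformable bodies are harder is that a bounding-volume hierarchy precomputed for a rigid shape becomes invalid as soon as the vertices move. A standard workaround (sketched below as an assumption, not as the project's method) is to keep the structure but cheaply *refit* the bounding boxes every frame from the current vertex positions.

```python
def refit(tris, verts):
    # Recompute one axis-aligned bounding box (AABB) per triangle after
    # the mesh deforms; verts is a list of (x, y, z) tuples.
    boxes = []
    for tri in tris:
        pts = [verts[i] for i in tri]
        lo = tuple(min(p[k] for p in pts) for k in range(3))
        hi = tuple(max(p[k] for p in pts) for k in range(3))
        boxes.append((lo, hi))
    return boxes

def overlap(a, b):
    # Two AABBs intersect iff their intervals overlap on all three axes.
    (alo, ahi), (blo, bhi) = a, b
    return all(alo[k] <= bhi[k] and blo[k] <= ahi[k] for k in range(3))

# Usage: refit after each simulation step, then test candidate pairs.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
tris = [(0, 1, 2)]
boxes = refit(tris, verts)
```

Refitting is linear in the number of primitives, which keeps the per-frame cost predictable even though the boxes grow looser than a full rebuild would produce.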

Cali Michael Fidopiastis

University/Department: Modeling and Simulation, University of Central Florida
Research Advisor: Dr. Jannick Rolland
Research Co-Advisor: Dr. Peter Kincaid
Contact: cali@odalab.ucf.edu

Investigation of Egocentric and Exocentric Distance Perception in Virtual Environments: Application to Enhance Transfer of Training in Multimodal VEs

My work centers on creating simulation-based cognitive and physical rehabilitation training applications for persons with traumatic brain injury. One obstacle in designing training scenarios for this clinical population is inaccurate perception of depth within virtual environments (VEs). Inaccurate depth perception may hinder transfer of training from the VE to real-world activities such as cooking or driving. The current work employs a systems approach to resolve this issue. Drawing on current research in head-mounted displays (HMDs), computer graphics, and human perception, we created a VE test bed to assess task performance in HMDs for both near-field and far-field tasks.

The first part of my research determines whether egocentric and exocentric pointing precision, the measures most commonly used in perception-action experiments, can accurately quantify depth errors in VEs. In addition to these measures, I will quantify differences in depth perception between tasks performed in real and virtual space as a function of HMD parameters such as resolution, light throughput, and field of view. For these initial experiments, healthy participants will perform near-field or far-field tasks. The chosen near-field task is relevant to medical training: medical professionals will determine the spatial locations of various anatomical landmarks. In the far-field task, participants will choose among distant objects in a shooting task.

The second part of my research explores the use of sound and haptics in addition to visual depth cues. For example, sound cues may assist in recovering depth perception degraded by the limited visual acuity and field of view of the HMD. Haptic cues may also play a similar role within some training scenarios.

The overall results of my research will show:
· Optimal system configurations for training near- and far-field tasks
· Greater understanding of ego- and exocentric pointing as a measure of depth error
· Visual, auditory, and haptic cues important for near- and far-field tasks
· Improved knowledge of the perceptual cues necessary for positive transfer of training
· The feasibility of applying this experimental methodology to TBI rehabilitation

Dahai Guo

University/Department: Department of Computer Engineering, University of Central Florida
Research Advisor: Dr. Harold I. Klee
Contact: dgu@bruce.engr.ucf.edu

Title: Creating Geo-Specific Road Databases for a Real-time Interactive Driving Simulator

My research topic is creating geo-specific road databases from aerial photographs for use in driving-simulation applications. Driving simulators have been used for research in areas such as safety, human factors, and driver training. Creating realistic road databases is one of the challenges faced by driving simulator manufacturers. The task becomes more difficult when geo-specific road databases are needed, because manually reconstructing irregularly shaped roads and medians is very labor intensive.

Even though some commercial software applications are available to make road modeling easier, the workload remains very large, so some degree of automation is clearly desirable. With the emergence of high-resolution aerial photography, image processing and pattern recognition techniques can be applied to developing geo-specific road databases. However, road areas make up only a small portion of an aerial photo, and all non-road objects are effectively noise for road detection.

In my research, the Digital Line Graph (DLG) from the United States Geological Survey (USGS) has been used to narrow the search space for roads in images. USGS DLG data are collections of road centerline information. Even though USGS DLG data significantly reduce the difficulty of road detection, other objects within road areas, such as traffic, shadows, and pavement discontinuities, can still hinder accurate road delineation. In my research, I have used and will continue to use many image processing and computer vision techniques to solve this problem. My research will contribute to accurate road delineation not only for geo-specific road modeling but also for other fields, such as Geographic Information Systems (GIS).
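The core idea of using DLG centerlines to narrow the search space can be sketched in a few lines. The corridor width, the brightness threshold, and the "bright pixels are pavement" rule below are all simplifying assumptions for illustration, not the thesis's actual classifier.

```python
import math

def dist_point_segment(p, a, b):
    # Euclidean distance from point p to the segment from a to b.
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    length2 = dx * dx + dy * dy
    if length2 == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / length2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def road_pixels(image, centerline, half_width, threshold):
    # Classify only pixels inside the corridor around the DLG centerline;
    # within it, pixels at or above the brightness threshold are taken to
    # be pavement (a hypothetical rule for this sketch).
    rows, cols = len(image), len(image[0])
    found = set()
    for r in range(rows):
        for c in range(cols):
            near = any(dist_point_segment((c, r), a, b) <= half_width
                       for a, b in zip(centerline, centerline[1:]))
            if near and image[r][c] >= threshold:
                found.add((r, c))
    return found
```

Restricting classification to the corridor is what makes the approach tractable: pixels far from any centerline, where most of the confusing non-road objects lie, are never examined at all.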

Cristian J. Luciano

University/Department: Electronic Visualization Laboratory/Industrial Virtual Reality Institute, University of Illinois at Chicago
Advisors: Prof. Tom De Fanti and Prof. Pat Banerjee
Contact: clucia1@uic.edu

Title: Haptics-Based Dental Procedure Training Simulator

Currently, in colleges of dentistry, instructors demonstrate dental procedures and students repeat them on practice manikins; students then perform the procedures on live patients. This time-consuming teaching process requires much one-on-one instructor/student interaction. My research project seeks to integrate haptics and 3D visualization into the dental classroom. The proposed augmented virtual reality simulation system will help students develop skill with a set of virtual dental instruments, teaching them to evaluate haptic textural differences between "normal" and "abnormal" situations by feeling the texture of virtual teeth and bones as well as the surrounding gingival tissue. Since periodontal procedures are essentially tactile in nature, students will depend entirely upon tactile sensation to learn what is felt in real scenarios. Using the simulator will decrease the need to practice these procedures on manikins, animals, and patients, speeding up training in a more interactive and appealing manner.

The bottlenecks of most haptic applications are collision detection and the determination of penetration depths between colliding virtual 3D objects, which are needed to compute reaction forces. Most haptic applications provide only point-object collision detection, so virtual objects can be touched only with the tip of the probe. For the proposed dental simulator, object-object collision detection algorithms will be needed to simulate contact between the complex-shaped dental instruments and the components of the mouth. Under these circumstances, implementing realistic haptic applications is very challenging, mainly because of the required high update rates (above 1 kHz).
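A common way to turn a penetration depth into a reaction force (not necessarily the method this project will use) is a penalty model: the force pushes back along the contact normal in proportion to how deep the penetration is, and the whole contact query plus force computation must finish within roughly one millisecond to sustain a 1 kHz haptic loop. The sphere-versus-plane geometry and the stiffness value below are illustrative assumptions.

```python
def sphere_plane_penetration(center, radius, plane_y=0.0):
    # Depth by which a sphere dips below the horizontal plane y = plane_y;
    # zero when there is no contact.
    return max(0.0, plane_y - (center[1] - radius))

def penalty_force(depth, stiffness=500.0):
    # Penalty model: reaction force magnitude proportional to penetration
    # depth, directed along the contact normal (here, straight up).
    return stiffness * depth

# One haptic frame: query penetration, then compute the force to render.
depth = sphere_plane_penetration((0.0, 0.5, 0.0), radius=1.0)
force = penalty_force(depth)
```

For point-object contact this query is trivial, which is why most haptic systems stop there; the object-object case the project targets replaces the one-line depth query with a search over many potentially colliding features, and that is where the 1 kHz budget becomes hard to meet.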

Due to the intricate shapes and stiffness of dental instruments and teeth, a stand-alone PC currently cannot meet the high computational requirements of this kind of haptic application. For that reason, I plan to investigate how parallel collision detection algorithms could be implemented in a distributed environment, taking advantage of the concurrent computing power of a PC cluster.
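The basic partitioning idea behind such a parallel scheme can be sketched without any cluster at all: split the candidate object pairs into chunks, test each chunk on a separate worker, and merge the results. Here a thread pool stands in for the PC cluster, and the sphere-sphere test is a placeholder for the real instrument-tooth collision check; both are assumptions for illustration only.

```python
from concurrent.futures import ThreadPoolExecutor

def spheres_collide(a, b):
    # Placeholder narrow-phase test: spheres given as (x, y, z, radius)
    # collide when center distance <= sum of radii.
    (ax, ay, az, ar), (bx, by, bz, br) = a, b
    d2 = (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
    return d2 <= (ar + br) ** 2

def parallel_collisions(spheres, workers=4):
    # Enumerate candidate pairs, deal them round-robin across workers,
    # run the chunks concurrently, and merge the per-worker results.
    pairs = [(i, j) for i in range(len(spheres))
                    for j in range(i + 1, len(spheres))]
    chunks = [pairs[k::workers] for k in range(workers)]

    def check(chunk):
        return [(i, j) for i, j in chunk
                if spheres_collide(spheres[i], spheres[j])]

    with ThreadPoolExecutor(max_workers=workers) as ex:
        results = ex.map(check, chunks)
    return sorted(p for chunk in results for p in chunk)
```

On a real cluster the chunks would travel over the network rather than to threads, so the open question the project raises is whether the per-chunk work is large enough to amortize that communication within the haptic loop's millisecond budget.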