
Binghamton computer scientists program robotic seeing-eye dog to guide the visually impaired

Computer science faculty and students train robot to respond to tugs on leash

Associate Professor of Computer Science Shiqi Zhang and his students have programmed a robot guide dog to assist the visually impaired. The robot responds to tugs on its leash.

Last year, the Computer Science Department at the Thomas J. Watson College of Engineering and Applied Science went trick-or-treating with a quadruped robotic dog. This year, the department is using the robot for something that Associate Professor Shiqi Zhang calls “much more important” than handing out candy, as fun as that can be.

Zhang, PhD student David DeFazio and junior Eisuke Hirota have been working on a robotic seeing-eye dog to increase accessibility for visually impaired people. They presented a demonstration in which the robot dog led a person around a lab hallway, responding confidently and carefully to tugs on its leash.

Zhang explained some of the reasoning behind starting the project.

“We were surprised that throughout the visually impaired and blind communities, so few of them are able to use a real seeing-eye dog for their whole life. We checked the statistics, and only 2% of them are able to do that,” he said.

The main barriers are cost and time: a real seeing-eye dog costs about $50,000 and takes two to three years to train, and only about 50% of dogs graduate from their training to go on to serve visually impaired people. Seeing-eye robot dogs offer a potentially significant improvement in cost, efficiency and accessibility.

This is one of the first attempts at building a seeing-eye robot, made possible by the maturing and falling cost of quadruped technology. After about a year of work, the team developed a unique leash-tugging interface, implemented through reinforcement learning.

“In about 10 hours of training, these robots are able to move around, navigating the indoor environment, guiding people, avoiding obstacles, and at the same time, being able to detect the tugs,” Zhang said.
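
The article does not describe how tug detection works under the hood; the sketch below is a purely hypothetical illustration of how a leash force reading might be classified into a directional command. The sensor model, threshold and command names are all assumptions, and in the team’s actual system the behavior is learned through reinforcement learning rather than hand-coded.

```python
# Hypothetical sketch of a leash-tug interface: classify a 2-D force
# reading at the leash attachment point into a directional command.
# The threshold, frame convention and command names are illustrative
# assumptions; the team's real policy is learned, not hand-coded.
import numpy as np

TUG_THRESHOLD = 5.0  # assumed minimum force (newtons) for a deliberate tug

def classify_tug(force_xy: np.ndarray) -> str:
    """Map a leash force vector (x = forward, y = left) to a command."""
    if np.linalg.norm(force_xy) < TUG_THRESHOLD:
        return "no_tug"  # incidental leash motion, keep current heading
    angle = np.arctan2(force_xy[1], force_xy[0])  # tug direction in radians
    if angle > np.pi / 4:
        return "turn_left"
    if angle < -np.pi / 4:
        return "turn_right"
    return "go_forward"

# Example: a firm pull to the left at a hallway intersection.
print(classify_tug(np.array([3.0, 6.0])))  # turn_left
```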

The tugging interface allows the user to pull the robot in a certain direction at an intersection in a hallway, prompting the robot to turn in response. While the robot shows promise, DeFazio said that further research and development are needed before the technology is ready for certain environments.

“Our next step is to add a natural language interface. So ideally, I could have a conversation with the robot based on the situation to get some help,” he said. “Also, intelligent disobedience is an important capability. For example, if I’m visually impaired and I tell the robot dog to walk into traffic, we would want the robot to understand that. We should disregard what the human wants in that situation. Those are some future directions we’re looking into.”
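
DeFazio’s “intelligent disobedience” idea can be illustrated with a short sketch: a safety check that vetoes a user command when perception flags a hazard. The command names and hazard flags below are invented for illustration and are not the team’s API.

```python
# Hypothetical sketch of intelligent disobedience: comply with the user's
# command unless perception indicates it would lead into a hazard.
# All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Perception:
    traffic_detected: bool   # e.g., from onboard cameras or lidar
    drop_off_ahead: bool     # e.g., an uneven drain or curb edge

def resolve_command(user_command: str, percept: Perception) -> str:
    """Return the action to execute, overriding unsafe user commands."""
    if user_command == "cross_street" and percept.traffic_detected:
        return "stop_and_warn"  # disobey: oncoming traffic
    if user_command == "go_forward" and percept.drop_off_ahead:
        return "stop_and_warn"  # disobey: sudden drop-off ahead
    return user_command         # otherwise, comply

# Example: the user asks to cross while traffic is detected.
print(resolve_command("cross_street", Perception(True, False)))  # stop_and_warn
```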

The team has been in contact with the Syracuse chapter of the National Federation of the Blind to get direct feedback from members of the visually impaired community. DeFazio thinks that specific input will help guide their future research.

“The other day we were speaking to a blind person, and she was mentioning how it is really important that you don’t want sudden drop-offs. For example, if there’s an uneven drain in front of you, it would be great if you could be warned about that, right?” DeFazio said.

While the team is not limiting itself in terms of what the technology could do, feedback and intuition lead the members to believe the robots may be most useful in specific environments. Because the robots can store maps of places that are especially difficult to navigate, they could be more effective than real seeing-eye dogs at leading visually impaired people to their desired destinations.
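
As a rough illustration of the stored-map idea, a venue could be represented as a graph of waypoints with a route planned over it. The airport layout and waypoint names below are invented, and the breadth-first search stands in for whatever planner the team actually uses.

```python
# Simplified sketch of map-based guidance: store a venue as a graph of
# waypoints and plan the shortest route with breadth-first search.
# The terminal layout and names are invented for illustration.
from collections import deque

terminal_map = {
    "entrance": ["security"],
    "security": ["entrance", "gate_a", "gate_b"],
    "gate_a":   ["security", "restroom"],
    "gate_b":   ["security"],
    "restroom": ["gate_a"],
}

def plan_route(graph: dict, start: str, goal: str):
    """Return the shortest waypoint sequence from start to goal, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route found

print(plan_route(terminal_map, "entrance", "restroom"))
# ['entrance', 'security', 'gate_a', 'restroom']
```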

“If this is going well, then potentially in a few years we can set up this seeing-eye robot dog at shopping malls and airports. It’s pretty much like how people use shared bicycles on campus,” Zhang said.

While the research is still in its early stages, the team believes it is a promising step toward making public spaces more accessible for the visually impaired community. Zhang lauded Binghamton and his students’ effective, service-driven initiative.

“We are the very best public school in the Northeast area,” he said. “Our undergraduate and graduate students are fantastic. I really appreciate my students, Dave DeFazio and Eisuke Hirota. They are able to develop the software and intelligence for this robot to serve the community. They are very helpful and this kind of research is not possible at all without the support of the students.”

The team will present a paper on their research at the Conference on Robot Learning (CoRL) in November.