Real-Time Mobility Assistance for the Legally Blind

Mentor 1

Mohammad Rahman

Start Date

May 1, 2020, 12:00 AM

Description

With autonomous technology increasingly around us, this research aims to bring vision assistance to the legally blind. We are currently using a rover to develop and test the technology. The rover uses ultrasonic waves to detect objects within a set distance and produces a five-bit sequence in which 1 represents an object in the path and 0 represents a clear path forward. The five bits divide the forward 180-degree field of view into five angular sectors, and the same encoding is applied to both forward and backward motion of the rover. We are also using a Light Detection and Ranging (LiDAR) sensor that maps the surroundings and, in conjunction with the rover's ultrasonic inputs, accurately detects obstacles in the path. A webcam that recognizes objects using neural networks is also used; its implementation is in progress. The webcam, together with the machine learning model, will be able to classify objects as stationary or in motion and to distinguish details such as whether a pedestrian traffic signal is on or off. All of these technologies will be integrated to give a legally blind person a cohesive, holistic experience for navigating streets independently in real time. The resulting information will be converted into audio commands for the user to follow. The long-term goal of the research is to replace the assistive measures blind people currently use while navigating and to shrink the technology into smart glasses that are easy to wear and adapt to, and fashionable at the same time. Haptic and braille feedback are also long-term avenues for imparting sensory information to the user seamlessly in real time.
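
To make the five-bit sector encoding concrete, here is a minimal sketch of how five ultrasonic distance readings could be mapped to the 1/0 sequence described above. The threshold distance, sector ordering, and function name are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch of the five-sector ultrasonic encoding (assumed details:
# threshold distance, leftmost-to-rightmost sector order, function name).

OBSTACLE_THRESHOLD_CM = 100  # assumed detection range; tune per sensor

def encode_sectors(distances_cm):
    """Map five sector distances (leftmost to rightmost across the
    forward 180 degrees, i.e., 36 degrees per sector) to a five-bit
    sequence: 1 = object on path, 0 = clear path forward."""
    if len(distances_cm) != 5:
        raise ValueError("expected one reading per 36-degree sector")
    return [1 if d <= OBSTACLE_THRESHOLD_CM else 0 for d in distances_cm]

# Example: obstacles detected in the two leftmost sectors only.
print(encode_sectors([40, 80, 150, 200, 180]))  # -> [1, 1, 0, 0, 0]
```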
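One way the bit sequence could be converted into the audio commands mentioned above is sketched below. The command phrasing, the steering heuristic, and the use of the pyttsx3 text-to-speech library are all assumptions for illustration; the abstract does not specify how commands are generated.

```python
# Hedged sketch: turn a five-bit sector pattern into a spoken instruction.
# pyttsx3 is an assumed text-to-speech backend, not the project's choice.

import pyttsx3

def choose_command(bits):
    """Pick a simple steering instruction from the five-bit pattern
    (index 0 = leftmost sector, index 4 = rightmost)."""
    if not any(bits):
        return "Path clear. Continue forward."
    if bits[2] == 0:
        return "Obstacle nearby. Keep straight."
    # Prefer the side with clear sectors.
    left_clear = bits[0] == 0 or bits[1] == 0
    right_clear = bits[3] == 0 or bits[4] == 0
    if right_clear and not left_clear:
        return "Obstacle ahead. Move right."
    if left_clear and not right_clear:
        return "Obstacle ahead. Move left."
    if left_clear and right_clear:
        return "Obstacle ahead. Move left or right."
    return "Stop. No clear path."

engine = pyttsx3.init()
engine.say(choose_command([1, 1, 1, 0, 0]))  # "Obstacle ahead. Move right."
engine.runAndWait()
```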
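For the in-progress webcam stage, a sketch of per-frame object detection with a pretrained neural network, plus a crude stationary-versus-in-motion check across consecutive frames, is shown below. The model choice (torchvision's Faster R-CNN), the score threshold, and the motion heuristic are all assumptions; the abstract does not name the project's actual model.

```python
# Sketch under stated assumptions: detect objects with a pretrained
# detector and flag same-class objects that shift between frames.

import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect(frame_bgr, score_min=0.7):
    """Return (label, center_x, center_y) for confident detections."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        out = model([to_tensor(rgb)])[0]
    hits = []
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if score >= score_min:
            x1, y1, x2, y2 = box.tolist()
            hits.append((int(label), (x1 + x2) / 2, (y1 + y2) / 2))
    return hits

cap = cv2.VideoCapture(0)  # default webcam
_, prev = cap.read()
prev_hits = detect(prev)
_, curr = cap.read()
for label, cx, cy in detect(curr):
    # Crude motion test: did an object of the same class shift noticeably?
    moved = any(pl == label and abs(px - cx) + abs(py - cy) > 20
                for pl, px, py in prev_hits)
    print(label, "in motion" if moved else "stationary")
cap.release()
```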
