An Intelligent Multi-Sensor Fusion Framework for Real-Time Obstacle Detection and Haptic-Audio Navigation Assistance for the Visually Impaired

G Mahammad Idrush, I Ashok Kumar, K Bhanu Prakash, J Sriman Narayana

Visually impaired individuals face significant challenges in independent mobility due to limited environmental perception, increasing the risks of collisions and disorientation. Traditional aids such as white canes provide limited range and no semantic information. This paper proposes an intelligent multi-sensor fusion framework for real-time obstacle detection and haptic-audio navigation assistance. The system integrates ultrasonic sensors for proximity sensing, an RGB-D camera for depth and object recognition (via a lightweight CNN), an IMU for motion tracking, and optional LiDAR for enhanced mapping. Data fusion employs an Extended Kalman Filter (EKF) for robust state estimation and obstacle localization, with deep learning (YOLOv8-lite + LSTM) for semantic classification (e.g., static vs. dynamic obstacles). Feedback is delivered via haptic vibrations (encoding direction and intensity) and audio cues (TTS announcements of direction and distance). Evaluated on custom indoor/outdoor datasets and in real-world trials, the framework achieves high detection accuracy (95.7%), low latency (<50 ms), and improved user confidence. It enhances safety, autonomy, and inclusivity while maintaining low power consumption and portability for wearable deployment.

Keywords: Multi-Sensor Fusion, Visually Impaired Navigation, Obstacle Detection, Haptic-Audio Feedback, Extended Kalman Filter, Deep Learning, Assistive Technology.
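To illustrate the fusion step described in the abstract, the sketch below implements a Kalman filter (the linear special case of the EKF) that tracks the range to a single obstacle by fusing noisy ultrasonic and RGB-D depth readings. The constant-velocity motion model, noise variances, and measurement values are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

dt = 0.05  # 20 Hz update cycle, consistent with the <50 ms latency target
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [range (m), range-rate (m/s)]
Q = np.diag([1e-4, 1e-3])               # process noise (assumed)
H = np.array([[1.0, 0.0]])              # both sensors measure range directly
R_ultra = np.array([[0.04]])            # ultrasonic variance (assumed)
R_depth = np.array([[0.01]])            # RGB-D depth variance (assumed)

def predict(x, P):
    """Propagate state and covariance one step under the motion model."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    """Correct the estimate with one range measurement of variance R."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x = np.array([[2.0], [0.0]])            # initial guess: 2 m away, stationary
P = np.eye(2)
# Three fusion cycles with simulated (ultrasonic, depth) readings in metres:
for z_u, z_d in [(1.92, 1.98), (1.88, 1.95), (1.85, 1.91)]:
    x, P = predict(x, P)
    x, P = update(x, P, np.array([[z_u]]), R_ultra)   # ultrasonic correction
    x, P = update(x, P, np.array([[z_d]]), R_depth)   # depth correction

print(round(float(x[0, 0]), 2))  # fused obstacle range estimate (m)
```

The full EKF in the paper generalizes this by linearizing nonlinear sensor models at each step; the predict/update structure and the weighting of sensors by their noise covariances are the same.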