Learning-Based Visual Navigation under Extreme Conditions for Planetary Exploration
Keywords:
Planetary Visual Navigation, Mars Rover, SLAM, DB-Net, Value Iteration Network, Sensor Fusion, 3D Mapping, Autonomous Exploration
Abstract
Planetary visual navigation is essential for autonomous exploration of extraterrestrial terrain under harsh environmental conditions. This work presents a learning-based visual navigation system that integrates SLAM-net with path planning and motion control to improve the reliability of Mars rover operation. The system employs DB-Net and Value Iteration Network (VIN) architectures trained on Martian terrain datasets to support rapid decision-making and path generation. The DB-Net model, which fuses global and local features, outperforms VIN in both navigation accuracy (95.6%) and success rate (93.3%). Incorporating Vision Transformer-based SLAM improves localization and mapping accuracy by up to 20% under low-light and dusty conditions. Trials with the Athena rover in Mars-analog environments demonstrate reliable obstacle avoidance, accurate pose estimation, and adaptive navigation. Fusing stereo vision with LiDAR further strengthens 3D perception and mapping. The proposed method achieves a 95% navigation success rate in NASA ROAMS simulations, indicating its suitability for robust, autonomous planetary exploration. The framework generalizes across terrain types, including sand, gravel, and hard ground. This work contributes to the development of next-generation intelligent navigation systems for future interplanetary missions.
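
For concreteness, the sketch below illustrates the value-iteration planning step that underlies VIN-style planners such as the one summarized above. It is a minimal, illustrative example only: the grid size, reward map, obstacle layout, and function names are assumptions, not the paper's DB-Net/VIN implementation.

```python
# Illustrative value-iteration planner on a 2D cost grid (assumed setup,
# not the authors' DB-Net/VIN implementation).
import numpy as np

def value_iteration(reward, obstacles, gamma=0.95, iters=100):
    """Iterate V <- R + gamma * max(neighbor V) over 4-connected moves."""
    V = np.zeros_like(reward, dtype=float)
    for _ in range(iters):
        # Best neighbor value for every cell; edges are padded with -inf.
        padded = np.pad(V, 1, constant_values=-np.inf)
        neighbors = np.stack([
            padded[:-2, 1:-1],   # value of the cell above (move up)
            padded[2:, 1:-1],    # move down
            padded[1:-1, :-2],   # move left
            padded[1:-1, 2:],    # move right
        ])
        V_new = reward + gamma * neighbors.max(axis=0)
        V_new[obstacles] = -1e6          # impassable cells keep a very low value
        if np.max(np.abs(V_new - V)) < 1e-4:
            V = V_new
            break
        V = V_new
    return V

def greedy_path(V, start, goal, max_steps=200):
    """Follow the value function greedily from start toward the goal."""
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    path, pos = [start], start
    for _ in range(max_steps):
        if pos == goal:
            break
        candidates = [(pos[0] + dr, pos[1] + dc) for dr, dc in moves
                      if 0 <= pos[0] + dr < V.shape[0] and 0 <= pos[1] + dc < V.shape[1]]
        pos = max(candidates, key=lambda rc: V[rc])
        path.append(pos)
    return path

if __name__ == "__main__":
    # Toy 10x10 terrain: the goal cell carries the only positive reward.
    grid = np.zeros((10, 10))
    goal = (9, 9)
    grid[goal] = 10.0
    obstacles = np.zeros((10, 10), dtype=bool)
    obstacles[4, 2:8] = True              # a wall the path must route around
    V = value_iteration(grid, obstacles)
    print(greedy_path(V, start=(0, 0), goal=goal))
```

In a learned VIN, the same recurrence is realized with convolution and max-pooling layers so that the reward map and transition weights are trained end to end rather than hand-specified as in this toy example.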
