Deep Reinforcement Learning-Enabled Energy-Efficient Routing Protocol for Underwater Wireless Sensor Networks

Authors

  • Yogeshwary Bommenahalli Huchegowda, Shri Madhwa Vadiraja Institute of Technology & Management
  • Mahadeva Prasad M, University of Mysore

Abstract

Underwater wireless sensor networks are widely used in sea and ocean exploration, environmental monitoring, and defense surveillance. These applications are constrained by limited energy availability, the propagation delay of acoustic signals, and frequent topology changes. To address these issues, this paper proposes a reinforcement learning (RL)-based routing protocol that combines energy-aware clustering with Q-learning to improve packet forwarding efficiency. In this approach, each sensor node acts as an autonomous agent and adaptively selects forwarding actions based on residual energy, hop count, and distance to the sink. MATLAB simulation results demonstrate that the proposed scheme achieves a packet delivery ratio (PDR) of 95.2%, which is 7.6% and 3.7% higher than vector-based forwarding (VBF) and reinforcement learning-based opportunistic routing (RLOR), respectively. The performance improvement is mainly attributed to adaptive Q-learning-based next-hop selection and energy-aware clustering, which reduce redundant and long-distance transmissions and avoid routing voids. Moreover, the proposed protocol extends network lifetime to 5000 iterations, an improvement of 19% over VBF and 6.4% over RLOR, while reducing average energy consumption by 25.7% and 13.3%, respectively.
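The abstract describes each node as a Q-learning agent that chooses a next hop from its neighbors using residual energy, hop count, and distance to the sink. The sketch below illustrates that idea in Python. The reward weights, the epsilon-greedy policy, and all function and field names are assumptions for illustration only; the paper states which features are used but not the exact reward function or update parameters.

```python
import random

# Illustrative hyperparameters (assumed; not taken from the paper).
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

def reward(neighbor):
    """Composite reward favoring high residual energy, few hops to the sink,
    and short transmission distance (weights are illustrative)."""
    return neighbor["energy"] - 0.1 * neighbor["hops"] - 0.01 * neighbor["dist"]

def select_next_hop(q, neighbor_ids, rng=random.random):
    """Epsilon-greedy next-hop selection over a node's neighbor set."""
    if rng() < EPSILON:
        return random.choice(neighbor_ids)          # explore
    return max(neighbor_ids, key=lambda n: q.get(n, 0.0))  # exploit

def update_q(q, chosen, r, q_next_max):
    """Standard Q-learning update for the chosen forwarding action."""
    old = q.get(chosen, 0.0)
    q[chosen] = old + ALPHA * (r + GAMMA * q_next_max - old)
```

A node would call `select_next_hop` per packet, observe the outcome, and apply `update_q`, so forwarding decisions adapt as neighbor energies drain and the topology shifts.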

Published

2026-05-16

Section

Wireless and Mobile Communications