Reinforcement Learning-Based Data Rate Congestion Control for Vehicular Ad-Hoc Networks. MSc Thesis Defense by: Gnana Shilpa Nuthalapati

Monday, June 19, 2023 - 11:00 to 12:00

The School of Computer Science is pleased to present the MSc thesis defense of Gnana Shilpa Nuthalapati.

Date: Monday, June 19th, 2023

Time: 11:00 AM – 12:00 PM

Location: Essex Hall Room 122

Abstract:

Vehicular Ad-Hoc Networks (VANETs) are an emerging wireless technology vital to Intelligent Transportation Systems (ITS) for vehicle-to-vehicle and vehicle-to-infrastructure communication. ITS aims to minimize traffic problems and improve transport safety by preventing unexpected events. When the vehicle density, i.e., the number of vehicles communicating on a wireless channel, increases, the channel becomes congested, making safety applications unreliable. Various decentralized congestion control algorithms have been proposed to reduce channel congestion by controlling transmission parameters such as message rate, transmission power, and data rate. This thesis proposes a data rate congestion control technique using the Q-learning algorithm to reduce and maintain the channel load near a target channel threshold. Q-learning is a model-free reinforcement learning algorithm: it learns the value of an action in a given state without relying on an explicit model of the environment. The proposed approach defines a set of states (vehicle densities and Channel Busy Ratio (CBR)) and actions (data rates) and finds the best action for each state. The goal is to train each vehicle to select the most appropriate data rate for sending a Basic Safety Message (BSM) while keeping the channel load near the target threshold. We train the Q-learning algorithm on data obtained from a simulated dynamic traffic environment, using a reward function that combines CBR and data rate so that the channel load stays near the target threshold at the lowest data rate possible. Simulation results show that the proposed technique outperforms Transmit Data rate Control (TDRC), Data Rate based Decentralized Congestion Control (DR-DCC), and the Data Rate Control Algorithm (DRCA) under medium load, and outperforms TDRC and DR-DCC under heavy load, reducing CBR and resulting in lower packet loss.
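The Q-learning formulation described in the abstract (states = vehicle density and CBR, actions = data rates, reward balancing closeness to the target CBR against the chosen rate) can be sketched as follows. This is an illustrative sketch, not the thesis implementation: the density levels, CBR bucketing, data-rate set, target CBR of 0.6, hyperparameters, and the toy channel model that stands in for the traffic simulator are all assumptions made for the example.

```python
import random

# Illustrative assumptions (not from the thesis): three vehicle-density
# levels, CBR discretized into 5 bins of width 0.2, three candidate data
# rates (Mbps), and a target Channel Busy Ratio of 0.6.
DENSITY_LEVELS = range(3)          # low / medium / high vehicle density
CBR_BUCKETS = range(5)             # CBR bins: [0, 0.2), ..., [0.8, 1.0]
DATA_RATES = [3, 6, 12]            # candidate data rates in Mbps
TARGET_CBR = 0.6
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: Q[(density, cbr_bucket)] -> one value per data-rate action.
Q = {(d, c): [0.0] * len(DATA_RATES) for d in DENSITY_LEVELS for c in CBR_BUCKETS}

def reward(cbr, rate):
    # Penalize distance from the target CBR, plus a small data-rate
    # penalty so the agent prefers the least rate that meets the target.
    return -abs(cbr - TARGET_CBR) - 0.01 * rate

def choose_action(state):
    # Epsilon-greedy action selection over the data rates.
    if random.random() < EPSILON:
        return random.randrange(len(DATA_RATES))
    qs = Q[state]
    return qs.index(max(qs))

def simulate_cbr(density, rate):
    # Toy stand-in for the simulated traffic environment: higher density
    # raises channel load; a higher data rate (shorter airtime per BSM)
    # lowers it. Clipped to the valid CBR range [0, 1].
    base = 0.3 + 0.25 * density
    cbr = base * (6 / rate) + random.uniform(-0.05, 0.05)
    return max(0.0, min(1.0, cbr))

def cbr_bucket(cbr):
    return min(int(cbr / 0.2), 4)

def train(episodes=5000):
    for _ in range(episodes):
        density = random.choice(list(DENSITY_LEVELS))
        cbr = simulate_cbr(density, random.choice(DATA_RATES))
        state = (density, cbr_bucket(cbr))
        a = choose_action(state)
        next_cbr = simulate_cbr(density, DATA_RATES[a])
        next_state = (density, cbr_bucket(next_cbr))
        r = reward(next_cbr, DATA_RATES[a])
        # Standard Q-learning update rule.
        Q[state][a] += ALPHA * (r + GAMMA * max(Q[next_state]) - Q[state][a])

train()
# The learned greedy policy maps each (density, CBR) state to a data rate.
policy = {s: DATA_RATES[qs.index(max(qs))] for s, qs in Q.items()}
```

In this sketch the reward trades off the two objectives stated in the abstract: keeping the measured CBR near the target threshold while preferring the lowest data rate that achieves it.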

Keywords: VANET, Congestion Control, Reinforcement Learning, Q-Learning, Channel Busy Ratio (CBR)

Thesis Committee:

Internal Reader:  Dr. Shaoquan Jiang      

External Reader: Dr. Kevin Li       

Advisor: Dr. Arunita Jaekel

Co-Advisor: Dr. Ning Zhang

Chair:    Dr. Ikjot Saini