Abstract
1- Introduction
2- Related work
3- Architecture and the design of an artificial agent for WSNs
4- Artificial agent energy-efficient traffic control in WSNs
5- Experiments and performance evaluation
6- Conclusion
References
Abstract
Wireless sensor networks are being applied to a growing range of problems, but energy consumption and communication latency remain critical limitations. Effective control and management of communication traffic is one potential remedy, so we propose a novel traffic-control system based on deep reinforcement learning that treats traffic control as a strategy-learning process whose objective is to minimize energy consumption. Our algorithm uses a deep neural network for learning: it takes the state of the wireless sensor network as input and outputs the optimal routing path. Simulation experiments demonstrate that the algorithm is a feasible means of traffic control in wireless sensor networks and that it reduces energy consumption.
Introduction
Nowadays, with the rapid development of Internet of Things (IoT) technology, wireless sensor networks (WSNs), as the core component of the IoT sensing layer, have been widely applied in a variety of fields [1][2]. Sensor-network technology integrates many technologies, such as computing, communications, and microelectronics [3], and a sensor network consists of a set of homogeneous or heterogeneous sensors distributed across different geographical regions. These sensors concurrently monitor physical or environmental conditions (such as temperature, pressure, motion, and pollution) over wireless channels and transfer the collected data to a central server, forming an autonomous network that realizes dynamic, intelligent, collaborative perception of the physical world. At present, WSNs are widely used in smart homes, logistics management, health supervision, flow monitoring, and other fields. Compared with a traditional wireless network, a sensor network is characterized by a large number of nodes with limited computing and storage capacity, limited power capacity, and limited communication capability [4]. Most data-collection sensor nodes are battery powered, and because they are often deployed in harsh environments, their batteries usually cannot be replaced over the network's lifetime. Thus, owing to the limited energy available at each sensor node, the main challenge in designing protocols for WSNs is energy efficiency [5][6]. Energy is therefore also a central concern when applying deep reinforcement learning to WSNs.
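To make the idea stated in the abstract concrete, the following is a minimal, hypothetical sketch of how a learned policy could map a WSN state to a next-hop choice. All names, dimensions, and the state encoding here are illustrative assumptions, not the paper's actual model: a small network scores each candidate next hop given a state vector (e.g. neighbors' residual energy and queue lengths), and routing follows an epsilon-greedy rule as in standard deep Q-learning.

```python
# Illustrative sketch only: the state encoding, network size, and
# hyperparameters below are assumptions, not the paper's actual design.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 8      # assumed encoding: residual energy + queue length of 4 neighbors
NUM_NEXT_HOPS = 4  # candidate next-hop nodes

# A single hidden layer stands in for the paper's deep network.
W1 = rng.normal(scale=0.1, size=(STATE_DIM, 16))
W2 = rng.normal(scale=0.1, size=(16, NUM_NEXT_HOPS))

def q_values(state: np.ndarray) -> np.ndarray:
    """Score each candidate next hop for the given network state."""
    hidden = np.maximum(0.0, state @ W1)  # ReLU activation
    return hidden @ W2

def choose_next_hop(state: np.ndarray, epsilon: float = 0.1) -> int:
    """Epsilon-greedy selection over next-hop scores (deep Q-learning style)."""
    if rng.random() < epsilon:
        return int(rng.integers(NUM_NEXT_HOPS))  # explore
    return int(np.argmax(q_values(state)))       # exploit

state = rng.random(STATE_DIM)          # placeholder state observation
hop = choose_next_hop(state, epsilon=0.0)
```

During training, the scores would be updated from an energy-based reward signal; here the weights are random, so the sketch only shows the state-in, route-decision-out interface.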