Deep reinforcement learning offers a model-free alternative to supervised deep learning and classical optimization for solving the transmit power control problem in wireless networks. The multi-agent deep reinforcement learning approach treats each transmitter as an individual learning agent that determines its transmit power level by observing the local wireless environment. Following a learned policy, these agents collaboratively maximize a global objective, e.g., a sum-rate utility function. This multi-agent scheme is easily scalable and practically applicable to large-scale cellular networks. In this work, we present a distributively executed continuous power control algorithm based on deep actor-critic learning, and more specifically on an adaptation of the deep deterministic policy gradient (DDPG) algorithm. Furthermore, we integrate the proposed power control algorithm into a time-slotted system where devices are mobile and channel conditions change rapidly. We demonstrate the functionality of the proposed algorithm through simulation results.
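As a concrete illustration of the actor-critic setup described above, the following is a minimal sketch, not the authors' implementation, of how a single per-transmitter DDPG agent could map a local observation to a continuous transmit power. The observation size OBS_DIM, the normalized power budget P_MAX, the network widths, and the omission of target networks, replay buffering, and exploration noise are all simplifying assumptions made here for brevity.

```python
# Minimal per-agent DDPG sketch (illustrative only, not the paper's code).
# Assumptions: observation size OBS_DIM, normalized power budget P_MAX,
# small networks, and no target networks or exploration noise.
import torch
import torch.nn as nn

OBS_DIM = 6   # assumed size of the local observation (e.g., CSI, interference)
P_MAX = 1.0   # assumed maximum transmit power (normalized)

class Actor(nn.Module):
    """Deterministic policy: local observation -> power level in (0, P_MAX)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),   # squash to (0, 1)
        )

    def forward(self, obs):
        return P_MAX * self.net(obs)

class Critic(nn.Module):
    """Action-value function Q(s, a) over observation-power pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + 1, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, obs, power):
        return self.net(torch.cat([obs, power], dim=-1))

actor, critic = Actor(), Critic()
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def update(obs, power, reward, next_obs, gamma=0.99):
    """One DDPG step from a replay batch; tensors are shaped (batch, dim)."""
    with torch.no_grad():
        # A full DDPG implementation would use slowly updated target copies
        # of both networks here; they are omitted for brevity.
        target_q = reward + gamma * critic(next_obs, actor(next_obs))
    critic_loss = nn.functional.mse_loss(critic(obs, power), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Deterministic policy gradient: raise the critic's score of the
    # power the actor currently outputs.
    actor_loss = -critic(obs, actor(obs)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

# Smoke test with random data to show the shapes involved.
obs = torch.randn(32, OBS_DIM)
update(obs, actor(obs).detach(), torch.randn(32, 1), torch.randn(32, OBS_DIM))
```

In a multi-agent deployment of this kind, each transmitter would run its own copy of the actor on locally observed quantities, which is what makes the scheme distributively executable; how experiences and rewards are shared across agents is a design choice the sketch above does not fix.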
In next-generation wireless networks (NGWNs), mobile users can connect to the core network through various heterogeneous wireless access networks, such as cellular networks, wireless metropolitan area networks (WMANs), wireless local area networks (WLANs), and ad hoc networks. An NGWN is expected to provide high-bandwidth connectivity with guaranteed quality-of-service (QoS) to mobile users in a seamless manner; however, this capability demands seamless coordination of the heterogeneous radio access network (RAN) technologies. In recent years, some research has been conducted to design radio resource management (RRM) architectures and algorithms for NGWNs; however, few studies address the problem of joint network performance optimization, which is an essential goal in a cooperative service-provisioning scenario. Furthermore, while some authors consider the competition among service providers, they do not fully account for the QoS requirements of users or the resource competition within access networks. In this paper, we present an interworking integrated network architecture that is responsible for monitoring the status information of the different radio access technologies (RATs) and executing the resource allocation algorithm. Within this architecture, the problem of joint bandwidth allocation for heterogeneous integrated networks is formulated based on utility function theory and bankruptcy game theory. The proposed bandwidth allocation scheme comprises two successive stages: service bandwidth allocation and user bandwidth allocation. At the service bandwidth allocation stage, the optimal amount of bandwidth for each type of service in each network is determined based on the criterion of joint utility maximization. At the user bandwidth allocation stage, the service bandwidth in each network is optimally allocated among the users in that network according to bankruptcy game theory. Numerical results demonstrate the efficiency of the proposed algorithm.
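To make the user bandwidth allocation stage concrete, the following sketch computes one classical bankruptcy-game solution, the Talmud rule of Aumann and Maschler, which coincides with the nucleolus of the associated bankruptcy game. The abstract does not state which solution concept the paper adopts, so the rule chosen here, the cea helper, and the example figures are illustrative assumptions only: the claims stand for user bandwidth demands, and the estate is the service bandwidth fixed at the first stage.

```python
# Illustrative bankruptcy-rule computation (the paper's exact solution
# concept is not stated in the abstract; the Talmud rule is used here as
# one standard choice).

def cea(claims, amount):
    """Constrained equal awards: pay min(claim, L) for a common level L.

    Assumes 0 <= amount <= sum(claims); L is found by bisection."""
    lo, hi = 0.0, max(claims)
    for _ in range(100):
        mid = (lo + hi) / 2
        if sum(min(c, mid) for c in claims) < amount:
            lo = mid
        else:
            hi = mid
    return [min(c, hi) for c in claims]

def talmud(claims, estate):
    """Talmud rule: CEA on half-claims when the estate is small,
    symmetric rationing of losses when it is large."""
    half = [c / 2 for c in claims]
    if estate <= sum(half):
        return cea(half, estate)
    losses = cea(half, sum(claims) - estate)  # divide the shortfall
    return [c - l for c, l in zip(claims, losses)]

# Hypothetical example: 10 MHz of service bandwidth, demands of 2, 6, 8 MHz.
print(talmud([2.0, 6.0, 8.0], 10.0))  # -> [1.0, 3.5, 5.5]
```

A proportional rule or the constrained equal awards rule alone would also fit the bankruptcy framing; the Talmud rule is shown because it is the canonical game-theoretic solution of the bankruptcy game.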
But Orbit differs from many other versions of this kind of GIS-mobile-utility enterprise integration in that it uses the Microsoft Windows Azure cloud platform as its core, as Danny Petrecca, enterprise GIS product management director at Schneider, explained in an interview this week.
Using a cloud-based system should make data-sharing and integration much easier and more cost-effective than building interfaces between lots of legacy systems, he said. That, in turn, could open up the system to everything from utility operations modeling and planning software to the enterprise systems that drive decision-making at the executive level.