Abstracts of
Accepted Papers

As mobile generations advance, blurring the lines between the physical, biological, and digital worlds, the deployment of ultra-dense networks poses challenges in resource allocation and energy efficiency. This article addresses the sustainability of 5G and beyond-5G (B5G) networks as cloud-native technologies. It aims to devise a feature selection methodology for monitoring frameworks that aligns with sustainability objectives. Notable open-source monitoring applications (e.g., Kepler and Scaphandre) are employed to collect data on resource utilization and energy consumption for containers and physical servers. Two dependence criteria, the Hilbert-Schmidt Independence Criterion (HSIC) and the Pearson correlation coefficient, identify interdependencies among time series from the different tools. The paper resolves the challenge of inconsistent dataset lengths, highlights the Pearson-HSIC trade-off, and underscores the need to align feature selection with resource utilization. Key findings include Scaphandre's underestimation of the impact of memory on power consumption and Kepler's limited host-level metrics. Finally, these results enable the development of an anomaly detection mechanism.
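
As a rough illustration of how such dependence indexes can be computed, the sketch below estimates the Pearson coefficient and a biased empirical HSIC (Gaussian kernels) between two equal-length monitoring time series. The variable names and toy data are hypothetical, and series of unequal length would first have to be aligned or truncated, as the paper discusses.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    return np.corrcoef(x, y)[0, 1]

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC with Gaussian kernels:
    HSIC(X, Y) = (1/n^2) * trace(K H L H), where H centers the kernel
    matrices. Values near 0 indicate (approximate) independence."""
    n = len(x)
    x = x.reshape(-1, 1)
    y = y.reshape(-1, 1)
    K = np.exp(-(x - x.T) ** 2 / (2 * sigma ** 2))
    L = np.exp(-(y - y.T) ** 2 / (2 * sigma ** 2))
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n ** 2

# Toy usage: CPU-utilization vs. power samples (hypothetical data).
rng = np.random.default_rng(0)
cpu = rng.random(200)
power = 2.0 * cpu + 0.1 * rng.standard_normal(200)
print(pearson(cpu, power), hsic(cpu, power))
```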

In this paper, we study the problem of deploying and operating Robotic Airborne Base Stations (RABSs) in 6G wireless networks while maximizing network energy efficiency. We mathematically model the problem and introduce an algorithm that solves it by accounting for traffic load variations and anticipated energy consumption. The performance of the algorithm is assessed through simulations. Our results demonstrate the superior performance of our approach compared to a greedy algorithm. In addition, we show that, in contrast to Unmanned Aerial Vehicle Base Station (UAV-BS) deployment, RABS deployment can achieve much higher energy efficiency.

In a virtualized radio access network (vRAN), base station functions are disaggregated into hardware (HW) and software and hosted on commercial off-the-shelf (COTS) HW. vRANs are expected to reduce infrastructure costs and enable flexible networks. Meanwhile, the energy consumption of radio access networks has been increasing, driving up both carbon footprint and electricity costs. Powering distributed COTS HW in vRANs with renewable energy is an effective countermeasure. However, these systems face two types of imbalance in the gap between energy demand and renewable energy supply: a spatial imbalance across different HW at a given time, and a temporal imbalance across different times on the same HW. These imbalances result in excessive use of non-renewable energy and in renewable energy wasted through battery overflow. In this paper, we propose a processing task offloading technique designed to address both imbalances, and we confirm its effectiveness through computer simulations.
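
A minimal sketch of the spatial side of such balancing is given below: movable load is shifted greedily from hosts with an expected energy deficit to hosts with a renewable surplus in the same time slot. The data structures and the greedy rule are illustrative assumptions, not the paper's technique, and the temporal imbalance (battery state across slots) is not modeled here.

```python
def offload_plan(demand, supply, movable):
    """Greedy sketch: shift movable load from energy-deficit hosts to
    energy-surplus hosts within one time slot (spatial balancing only).

    demand, supply: dicts mapping host -> expected energy in the slot.
    movable: dict mapping host -> energy attributable to movable tasks.
    Returns a list of (src, dst, amount) transfers.
    """
    gap = {h: supply[h] - demand[h] for h in demand}  # >0 means surplus
    donors = sorted((h for h in gap if gap[h] < 0), key=lambda h: gap[h])
    sinks = sorted((h for h in gap if gap[h] > 0), key=lambda h: -gap[h])
    plan = []
    for src in donors:
        need = -gap[src]
        avail = movable.get(src, 0.0)
        for dst in sinks:
            if need <= 0 or avail <= 0:
                break
            amt = min(need, gap[dst], avail)
            if amt > 0:
                plan.append((src, dst, amt))
                gap[dst] -= amt
                avail -= amt
                need -= amt
    return plan

# Hypothetical three-host example (energy units per slot).
print(offload_plan({"a": 5, "b": 2, "c": 3}, {"a": 1, "b": 4, "c": 5},
                   {"a": 3, "b": 1, "c": 1}))
```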

The widespread adoption of machine learning (ML) across various industries has raised sustainability concerns due to its substantial energy usage and carbon emissions. This issue becomes more pressing in adversarial ML, which focuses on enhancing model security against various network-based attacks. Implementing defenses in ML systems often requires additional computational resources and network security measures, exacerbating their environmental impact. In this paper, we present the first investigation into adversarial ML's carbon footprint, providing empirical evidence that connects greater model robustness to higher emissions. To quantify this trade-off, we introduce the Robustness Carbon Trade-off Index (RCTI). This novel metric, inspired by economic elasticity principles, captures the sensitivity of carbon emissions to changes in adversarial robustness. We demonstrate the RCTI through an experiment involving evasion attacks, analyzing the interplay between robustness against attacks, performance, and carbon emissions.
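
An elasticity-style index of this kind can be sketched as follows; the exact definition of the RCTI is given in the paper, so the functional form and the numbers below are purely illustrative assumptions.

```python
def rcti(robustness_before, robustness_after, co2_before, co2_after):
    """Elasticity-style index: percentage change in carbon emissions
    per percentage change in adversarial robustness (hypothetical form;
    the paper's exact definition may differ)."""
    d_robust = (robustness_after - robustness_before) / robustness_before
    d_co2 = (co2_after - co2_before) / co2_before
    return d_co2 / d_robust

# Example: robust accuracy rises 60% -> 72% while training emissions
# rise 100 g -> 150 g CO2e: index = 0.5 / 0.2 = 2.5 (emission-elastic).
print(rcti(0.60, 0.72, 100.0, 150.0))
```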

Efficient communication for Internet of Things (IoT) devices in remote rural areas, where resources are limited, remains a significant challenge, and a technique capable of long-range, low-power communication is essential. Chirp spread spectrum (CSS) combined with multi-antenna transmission is a potential solution; however, implementing a multiple-input multiple-output (MIMO) configuration in small nodes is challenging. In this paper, we explore distributed transmission of CSS waveforms to extend the communication range at the same transmit power. Because achieving perfect synchronization among transmitters is intricate and time-consuming, we resort to non-coherent distributed transmission (NCDT). In the presence of asynchronous transmitters, symbol detection is carried out using a least-squares (LS) detector. Simulation results demonstrate the superior bit error rate (BER) performance of the proposed NCDT-CSS scheme, making it a power-efficient solution for IoT devices in rural communication settings.
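
The least-squares detection step can be illustrated with a toy model in which the gains, phases, and residual timing offsets of the asynchronous transmitters are lumped into an effective channel matrix; the dimensions and noise level below are arbitrary assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: K asynchronous transmitters, N received samples. The
# effective channel H lumps per-transmitter gains, phases, and residual
# timing offsets of the non-coherent setup (hypothetical).
K, N = 2, 64
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
s = rng.choice([-1.0, 1.0], size=K)              # transmitted symbols
noise = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = H @ s + noise

# Least-squares detection: s_hat = argmin ||y - H s||^2, then slicing.
s_ls, *_ = np.linalg.lstsq(H, y, rcond=None)
s_hat = np.sign(s_ls.real)
print(s, s_hat)
```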

The boom in the Internet of Things (IoT) is expected to provide more intelligent and reliable communication services, with higher network coverage, massive connectivity, and low-cost solutions for 6G services. However, frequent charging and battery replacement for these massive numbers of IoT devices bring a series of challenges. Zero-energy devices, which rely on energy-harvesting technologies and can operate without battery replacement or charging, play a pivotal role in facilitating the massive use of IoT devices. To enable reliable communications for such low-power devices, Manchester-coded on-off keying (OOK) modulation and non-coherent detection are attractive techniques due to their energy efficiency, robustness in noisy environments, and simplicity of receiver design. Moreover, to extend the communication range, employing channel coding along with enhanced detection schemes is crucial. In this paper, a novel soft-decision decoder is designed for OOK-based low-power receivers to enhance their detection performance. In addition, exact closed-form expressions and two simplified approximations are derived for the log-likelihood ratio (LLR), an essential metric for soft decoding. Numerical results demonstrate the significant coverage gain achieved through soft decoding for convolutional codes.
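
For orientation, the textbook AWGN envelope-detection case gives LLR expressions of the flavor derived in the paper; the paper's exact closed forms and approximations may differ.

```latex
% Envelope r, ON-amplitude A, noise variance \sigma^2 per dimension
% (textbook non-coherent OOK illustration, not the paper's derivation):
\begin{align*}
p(r \mid 0) &= \frac{r}{\sigma^2}\, e^{-r^2/2\sigma^2}, \qquad
p(r \mid 1) = \frac{r}{\sigma^2}\, e^{-(r^2+A^2)/2\sigma^2}\,
              I_0\!\left(\frac{A r}{\sigma^2}\right), \\
\mathrm{LLR}(r) &= \ln \frac{p(r \mid 1)}{p(r \mid 0)}
 = \ln I_0\!\left(\frac{A r}{\sigma^2}\right) - \frac{A^2}{2\sigma^2}
 \;\approx\; \frac{A r}{\sigma^2} - \frac{A^2}{2\sigma^2}
 \quad (\ln I_0(x) \approx x \text{ at high SNR}).
\end{align*}
```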

UnderWater (UW) communication is challenging due to the harsh propagation environment, which can cause high data corruption and loss. Indeed, UW communication channels experience limited bandwidth, high time variability, and much longer delays compared to traditional terrestrial channels. This leads to frequent retransmissions, which consume valuable energy at the network nodes and shorten their lifetime. One way to cope with this issue is to employ reliable smart protocols that improve packet routing in a UW network so as to optimize transmission efficiency, minimize latency, save energy, and ensure network robustness. To this aim, in this paper, we propose a routing algorithm, called BOUNCE, for UnderWater Acoustic (UWA) networks. BOUNCE considers the channel conditions to route packets through the nodes that ensure the highest data quality, the fastest routing speed, and a balanced energy consumption across the entire network. It is evaluated through simulations whose results show that BOUNCE outperforms existing routing algorithms in terms of data quality, routing speed, and energy efficiency.
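
A minimal sketch of a channel- and energy-aware next-hop rule in the spirit of BOUNCE is shown below; the score function and weights are illustrative assumptions, not the paper's actual metric.

```python
def next_hop(neighbors, w_quality=0.5, w_delay=0.3, w_energy=0.2):
    """Pick the neighbor maximizing a weighted score that rewards link
    quality and residual energy and penalizes delay (illustrative rule).

    neighbors: list of dicts with 'id', 'link_quality' in [0, 1],
    'delay' (normalized, lower is better), 'residual_energy' in [0, 1].
    """
    def score(n):
        return (w_quality * n["link_quality"]
                - w_delay * n["delay"]
                + w_energy * n["residual_energy"])
    return max(neighbors, key=score)["id"]

# The energy term steers traffic away from depleted nodes: n2 wins here
# despite a slightly worse link, balancing consumption network-wide.
print(next_hop([
    {"id": "n1", "link_quality": 0.9, "delay": 0.4, "residual_energy": 0.3},
    {"id": "n2", "link_quality": 0.7, "delay": 0.2, "residual_energy": 0.9},
]))
```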

From 1G to 5G, multiple access technologies have played a critical role in wireless communications. Among the candidate multiple access technologies for future wireless networks, Model Division Multiple Access (MDMA) stands out for its ability to enlarge both the spectrum efficiency and the feasibility region, offering superior performance gains over conventional schemes. By optimizing resource utilization and improving spectral efficiency, MDMA helps reduce the energy consumption and carbon footprint of wireless communication networks. With MDMA, relay-assisted cooperative networks can use resources effectively and improve overall performance, serving as a valuable infrastructure for green wireless communication. In this paper, we propose a relay-based cooperative communication network built on MDMA, where each link is an independent and identically distributed Rayleigh fading channel. The network consists of two source nodes, an arbitrary number of decode-and-forward (DF) relay nodes, and one destination node. At the destination, maximal ratio combining (MRC) combines the signals received from both the source nodes and the relay nodes, with the goal of achieving better energy efficiency. Analytical expressions for the outage probability and resource utilization efficiency are derived using the moment generating function (MGF) and state transition matrix (STM); these closed-form solutions provide valuable insights into the performance of the proposed network. The analysis is corroborated by simulation results, which validate the effectiveness of the proposed approach.
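
As a sanity-check building block, the outage probability of L-branch MRC over i.i.d. Rayleigh fading has the standard closed form below; the paper's MGF/STM analysis covers the full two-source relay network, which this sketch does not attempt.

```python
import numpy as np
from scipy.special import gammainc

def outage_mrc_rayleigh(L, gbar, gth):
    """Outage probability of L-branch MRC over i.i.d. Rayleigh fading:
    the combined SNR is Gamma(L, scale=gbar), so
    P_out = P(gamma < gth) = gammainc(L, gth / gbar)
    (regularized lower incomplete gamma function)."""
    return gammainc(L, gth / gbar)

# Monte Carlo cross-check (hypothetical 3-branch link, mean SNR 10).
rng = np.random.default_rng(2)
L, gbar, gth = 3, 10.0, 5.0
snr = rng.exponential(gbar, size=(1_000_000, L)).sum(axis=1)
print(outage_mrc_rayleigh(L, gbar, gth), (snr < gth).mean())
```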

Virtualized Radio Access Networks (vRANs) are a promising solution for reducing the energy consumption of 5G networks. However, the shift from traditional to virtualized RANs has rendered existing RAN energy models obsolete. In this paper, we present a methodology and testbed for developing an energy model for vRANs. Our methodology estimates the energy consumption of Baseband Unit functions from performance monitoring counters and utilization metrics collected on a testbed. We provide experimental results that offer insights into the distribution of energy across RAN functions; these results suggest that, under our experimental conditions, the physical layer is considerably more energy-intensive than the other layers. We believe that our methodology can serve as a foundation for constructing an energy consumption model that will facilitate the development of energy-aware load balancers and the selection of energy-efficient RAN deployments, leading to significant energy savings.
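
A minimal sketch of the estimation step, assuming a linear power model fitted to counters and utilization metrics; the feature set, coefficients, and synthetic data below are hypothetical, not the paper's testbed measurements.

```python
import numpy as np

# Fit power = idle + w . features from performance-monitoring counters
# (PMCs) and utilization metrics (feature names are hypothetical).
rng = np.random.default_rng(3)
n = 500
X = np.column_stack([
    rng.random(n),          # CPU utilization of a BBU function
    rng.random(n) * 1e9,    # retired instructions (PMC)
    rng.random(n) * 1e6,    # last-level cache misses (PMC)
])
true_w = np.array([40.0, 3e-8, 1e-5])
power = 55.0 + X @ true_w + rng.standard_normal(n)  # watts, synthetic

A = np.column_stack([np.ones(n), X])    # intercept captures idle power
coef, *_ = np.linalg.lstsq(A, power, rcond=None)
print("idle power and per-feature coefficients:", coef)
```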

Energy saving in wireless networks is growing in importance due to the increasing demands on evolving next-generation cellular networks, environmental and regulatory concerns, and potential energy crises arising from geopolitical tensions. In this work, we propose an approximate dynamic programming (ADP)-based method, coupled with online optimization, that switches cells of base stations on and off to reduce network power consumption while maintaining adequate Quality of Service (QoS). A multilayer perceptron (MLP) predicts, for each state-action pair, the resulting power consumption, thereby approximating the value function in ADP and allowing us to select the action with the greatest expected power savings. To maximize power savings without deteriorating QoS, we add a second MLP to predict QoS and a long short-term memory (LSTM) network to predict handovers; both feed an online optimization algorithm that produces an adaptive QoS threshold for filtering cell-switching actions based on the overall QoS history. The performance of the method is evaluated using a practical network simulator across various real-world scenarios with dynamic traffic patterns.
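
One decision step of such a pipeline might look as follows; the predictors are injected as plain callables standing in for the trained MLPs (and the LSTM-informed threshold), and all numbers are hypothetical.

```python
def select_action(state, actions, qos_threshold,
                  predict_power, predict_qos):
    """Sketch of one ADP decision step: keep only on/off actions whose
    predicted QoS clears the adaptive threshold, then pick the one with
    the lowest predicted power consumption."""
    feasible = [a for a in actions if predict_qos(state, a) >= qos_threshold]
    if not feasible:
        return None  # no action passes the QoS filter; change nothing
    return min(feasible, key=lambda a: predict_power(state, a))

# Toy stand-ins for the learned models (hypothetical numbers).
power = {"off_cell_1": 80.0, "off_cell_2": 95.0, "noop": 120.0}
qos = {"off_cell_1": 0.97, "off_cell_2": 0.90, "noop": 0.99}
print(select_action("s0", list(power),
                    qos_threshold=0.95,
                    predict_power=lambda s, a: power[a],
                    predict_qos=lambda s, a: qos[a]))  # -> off_cell_1
```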

Data collection from multiple cells is essential for building AI/ML-based models in next-generation 5G/6G cellular networks. However, pulling data from a high percentage of the cells in the network to train a model incurs a high training overhead in terms of network (or spectrum) bandwidth, compute, and storage. From network field data, we observe that the data distributions of different cells are not independent and identically distributed (i.i.d.) but are instead biased toward their regions of operation, so using data from a high percentage of cells to train AI/ML models, as done in some previous works, may be suboptimal. An efficient cell selection scheme for training data collection is therefore critical for wireless cellular systems to optimize model performance at low training overhead. In this work, we propose novel schemes that exploit submodular function structure to select the cells from which training data is gathered. For the network energy saving use case, we train a load forecasting model on cells selected by our proposed approach and demonstrate up to a 2x advantage in top-5% cell throughput drop performance, with similar power savings, relative to the baseline scheme of collecting training data from all cells in the network. Additionally, we show up to a 90x reduction in training data collection using our proposed scheme.
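
A common way to realize such selection is greedy maximization of a monotone submodular surrogate, e.g. a facility-location objective over cell similarities; the sketch below uses that standard construction, which may differ from the paper's actual objective.

```python
import numpy as np

def greedy_submodular(sim, k):
    """Greedy maximization of the facility-location objective
    f(S) = sum_i max_{j in S} sim[i, j], a standard monotone submodular
    measure of how well the selected cells represent all cells. Greedy
    achieves a (1 - 1/e) approximation for such functions."""
    n = sim.shape[0]
    selected, best = [], np.zeros(n)
    for _ in range(k):
        gains = [np.maximum(best, sim[:, j]).sum() - best.sum()
                 for j in range(n)]
        j_star = int(np.argmax(gains))
        selected.append(j_star)
        best = np.maximum(best, sim[:, j_star])
    return selected

# Toy cosine similarity between 6 cells' load features (hypothetical).
rng = np.random.default_rng(4)
feats = rng.random((6, 3))
norms = np.linalg.norm(feats, axis=1)
sim = (feats @ feats.T) / np.outer(norms, norms)
print(greedy_submodular(sim, k=2))
```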

Federated learning (FL) provides a promising collaborative framework for building a model from distributed clients, and this work investigates the carbon emissions of the FL process. Cloud and edge servers hosting FL clients may exhibit diverse carbon footprints, influenced by their geographical locations and varying power sources, offering opportunities to reduce carbon emissions by training local models with adaptive computations and communications. In this paper, we propose FedGreen, a carbon-aware FL approach that trains models efficiently by adapting the model sizes shared with clients according to their carbon profiles and locations, using ordered dropout as the model compression technique. We theoretically analyze the trade-off between the produced carbon emissions and the convergence accuracy, taking the carbon intensity discrepancy across countries into account to choose the parameters optimally. Empirical studies show that FedGreen can substantially reduce the carbon footprint of FL compared to the state of the art while maintaining competitive model accuracy.
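
The core idea of ordered dropout, keeping a nested prefix of each layer's units so that clients with tight carbon budgets train a smaller sub-model, can be sketched as follows; this is a simplification, not FedGreen's actual mechanics (e.g. matching input slices across consecutive layers).

```python
import numpy as np

def ordered_dropout(weights, p):
    """Ordered-dropout-style width scaling: keep the first ceil(p * d)
    output units of a layer, so a low-carbon client trains a nested
    sub-model whose updates still align with the full model's prefix."""
    keep = max(1, int(np.ceil(p * weights.shape[0])))
    return weights[:keep, :]

W = np.arange(12, dtype=float).reshape(4, 3)  # 4 output units, 3 inputs
print(ordered_dropout(W, p=0.5).shape)        # -> (2, 3)
```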

This paper provides a brief overview of a 5G-Advanced amplify-and-forward repeater technology called the network-controlled repeater (NCR). A 5G base station can use an NCR to extend its coverage in a cost-efficient manner, and the network controllability of the NCR lets the base station achieve better performance than with conventional repeaters. This paper investigates repeaters from the network energy consumption point of view and shows that allowing the network to also control the NCR's power unlocks extra benefits and enables further performance and energy optimizations. The performance of the NCR is compared with that of the Reconfigurable Intelligent Surface (RIS); it is shown that, while repeaters may consume more power than an RIS, they may enable greater energy-saving opportunities at the base station and/or the users, and hence provide an overall more energy-efficient solution for 5G-Advanced and 6G networks.

The use of millimeter wave (mmWave) bands in 5G radio access networks (RANs) is an appealing solution to meet high data volume demands. The wide channel bandwidth of mmWave complements the integrated access and backhaul (IAB) technique, helping to ameliorate the challenges of deploying fiber backhaul in a dense small cell RAN infrastructure. This work analyzes the capacity, energy consumption, and energy efficiency (EE) of a homogeneous dense small cell RAN using the IAB technique in mmWave frequency bands. The results show that introducing IAB causes the RAN throughput, and hence the EE, to degrade significantly. However, the careful application of sleep mode techniques, whereby empty small cells are completely turned off, mitigates the EE degradation. In a high traffic load scenario with 900 users per km^2, the EE increases on average by 3.12-fold with deep sleep modes enabled.
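
The EE metric itself reduces to delivered bits per joule; a toy computation with illustrative power figures (not the paper's model) shows how deep sleep lifts EE when cells are empty.

```python
def energy_efficiency(throughputs, loads, p_active=100.0, p_sleep=5.0):
    """Toy EE computation (bits per joule) for a set of small cells
    with deep sleep: an empty cell draws p_sleep instead of p_active
    (illustrative power figures in watts)."""
    total_bits = sum(throughputs)
    total_power = sum(p_active if load > 0 else p_sleep for load in loads)
    return total_bits / total_power

# 4 cells, two of them empty and asleep (hypothetical numbers, b/s, W).
print(energy_efficiency([300e6, 150e6, 0, 0], [0.6, 0.3, 0, 0]))
```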

Energy efficiency is a crucial consideration in 5G and beyond, especially in the RAN. A base station (BS) in the RAN, comprising a Distributed Unit (DU), a Central Unit (CU), and a Radio Unit (RU), consumes significant power, and varying network traffic patterns provide an opportunity to save power during low-traffic periods. Existing methods are not proactive and lack real-time training capabilities. In this paper, we propose an intelligent power management method that proactively adjusts the operating CPU frequency of BS units based on the predicted core load. The multi-core processors in a BS may have multiple core-groups, each dedicated to different applications with different core loads. To ensure better generalization and avoid overfitting the prediction model to a single BS, data from multiple BSs are combined at a Central Management Entity (CME). We derive simple mathematical formulas for the core load prediction of each core-group using Reinforcement Learning (RL) at the CME. Subsequently, the appropriate operating CPU frequency for each BS is determined based on the predicted core loads of the multiple core-groups. Experiments are conducted using real network traces at a RAN DU comprising the Internet Protocol (IP) & transport layer, Radio Link Control (RLC), and Linux application core-groups. The proposed adaptive power management method reduces processor power consumption by around 10%, leading to significant cost savings in commercial deployments.
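
The frequency-selection step, given the predicted core-group loads, might be sketched as follows; the frequency set, headroom factor, and load values are hypothetical, and the RL-based prediction itself is out of scope here.

```python
def pick_frequency(pred_loads, freqs_mhz=(1200, 1800, 2400), headroom=1.2):
    """Sketch: scale the predicted peak core-group load (normalized to
    utilization at the maximum frequency) by a safety headroom, then
    choose the lowest CPU frequency that still covers the demand.

    pred_loads: dict core-group -> predicted utilization at max freq."""
    required = max(pred_loads.values()) * headroom * max(freqs_mhz)
    for f in sorted(freqs_mhz):
        if f >= required:
            return f
    return max(freqs_mhz)

# E.g. IP/transport, RLC, and Linux core-groups during low traffic:
# peak load 0.40 * 1.2 * 2400 MHz = 1152 MHz demand -> run at 1200 MHz.
print(pick_frequency({"ip_transport": 0.35, "rlc": 0.40, "linux": 0.20}))
```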

With a double-digit annual growth rate, Narrowband IoT (NB-IoT) has been gaining traction since its inception. This cellular radio access technology suits use cases requiring energy-efficient operation, low complexity and cost, and deployments of massive numbers of devices. However, power- and latency-sensitive IoT use cases, such as actuators, need a better solution than extended discontinuous reception (eDRX) and Power Saving Mode (PSM), which are limited by the power/latency trade-off. The Wake-Up Signal (WUS) defined in NB-IoT provides a potential path toward energy-efficient operation without compromising latency. Moreover, the WUS/wake-up receiver (WUR) approach can further lower the average power, with the potential for a large power-saving gain. In this paper, an NB-IoT system prototype with a WUS/WUR implementation is presented, and an extensive power-saving analysis based on the implementation is performed. To our knowledge, this is the first work that provides a power-saving analysis based on an NB-IoT WUS/WUR prototype implementation. It offers insight into the relationship between power saving and NB-IoT system parameters under different scenarios.
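
A back-of-the-envelope average-power model conveys why a WUR helps: the main receiver's cost is paid only on actual wake-ups, while the micro-power WUR monitors continuously. All figures below are hypothetical; the paper's analysis is based on measurements of a real prototype.

```python
def avg_power_uw(duty_cycle_monitor, p_wur_uw, p_main_rx_mw,
                 wakeups_per_day, rx_time_s, seconds_per_day=86400):
    """Average power (microwatts) with a wake-up receiver: the WUR
    monitors the Wake-Up Signal at low power, and the main NB-IoT
    receiver runs only for the actual wake-up events."""
    p_monitor = duty_cycle_monitor * p_wur_uw               # microwatts
    p_rx = wakeups_per_day * rx_time_s * p_main_rx_mw * 1e3 / seconds_per_day
    return p_monitor + p_rx

# WUR at 10 uW monitoring continuously; 20 wake-ups/day, each with 2 s
# of main-receiver activity at 60 mW -> roughly 38 uW average.
print(avg_power_uw(1.0, 10.0, 60.0, 20, 2.0))
```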

This paper studies the adoption of simultaneous wireless information and power transfer (SWIPT) technology in IEEE 802.11ah networks as a way to achieve energy-sustainable operation. We consider a specific class of Internet of Things (IoT) applications in which devices generate and transmit a single data frame independently and periodically, with a period much larger than a typical beacon interval in IEEE 802.11ah networks. Each device harvests energy from every received beacon frame and remains in the sleep state during every beacon interval, except when it wakes up in its assigned Restricted Access Window (RAW) slot to transmit its single data frame. We investigate the conditions for energy-sustainable operation with power-splitting SWIPT under channel propagation models typically adopted in IEEE 802.11ah networks. The analysis yields an upper bound on the distance between the AP and a device for energy-sustainable operation in this application scenario. We present numerical results on the upper bound to showcase its applicability in network planning, along with ns-3 simulation results, based on the SWIPT module we developed, that validate the bound.
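
An illustrative form of the sustainability condition and the resulting distance bound, assuming a simple log-distance path-loss model rather than the paper's IEEE 802.11ah channel models:

```latex
% \eta = harvesting efficiency, \rho = power-splitting ratio,
% N_b = beacons received per data period, L_0 = path loss at reference
% distance d_0, \alpha = path-loss exponent (all illustrative symbols):
\begin{align*}
\underbrace{N_b\,\eta\,\rho\,P_r(d)\,T_{\mathrm{beacon}}}_{\text{energy harvested per period}}
 \;\ge\;
\underbrace{E_{\mathrm{wake}} + E_{\mathrm{tx}}}_{\text{energy consumed per period}},
\qquad
P_r(d) = \frac{P_t\,G}{L_0\,(d/d_0)^{\alpha}},
\end{align*}
% which, solved for the distance, yields the upper bound
\begin{align*}
d \;\le\; d_0 \left( \frac{N_b\,\eta\,\rho\,P_t\,G\,T_{\mathrm{beacon}}}
{L_0\,(E_{\mathrm{wake}} + E_{\mathrm{tx}})} \right)^{\!1/\alpha}.
\end{align*}
```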