Low-Latency Energy-Efficient Cyber-Physical Disaster System: Reported works on cyber-physical disaster systems (CPDS) deal with assessing loss and damage in the aftermath of a large-scale disaster such as an earthquake, wildfire, or cyclone. This typically involves collecting data from IoT devices and sending it to cloud data centers for analysis, which often incurs high bandwidth usage and substantial delay. In our work, we show how to eliminate the bandwidth cost and substantially reduce latency, making the system suitable for post-disaster rescue operations. We propose a low-latency, energy-efficient CPDS that applies a cloud-IoT-edge architecture, bringing intelligence and inferencing close to the disaster site to detect disaster events in real time and inform rescue teams. The edge computing model of CPDS uses a convolutional neural network (CNN) based on the lightweight MobileNetV2 model together with gradient-weighted class activation mapping (Grad-CAM++) to locate and quantify the degree of damage into three classes: severe, mild, and no damage. We implemented CPDS on a real-world laboratory testbed comprising resource-constrained edge devices (Raspberry Pi boards, smartphones, and PCs) and Docker-based containerization of the deep learning models, and analyzed its computational complexity. Through rigorous experiments, we evaluated the performance of the proposed approach in terms of classification accuracy, energy savings, and end-to-end (E2E) delay against current state-of-the-art approaches.
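The class-activation mapping step described above can be illustrated with a minimal sketch. This shows plain Grad-CAM for brevity (the paper uses the Grad-CAM++ variant), with hypothetical toy feature maps and gradients supplied as nested lists rather than real CNN tensors:

```python
# Minimal Grad-CAM sketch (illustrative; the paper applies Grad-CAM++ on
# MobileNetV2 features). `fmaps` holds K feature maps of a conv layer and
# `grads` the class-score gradients w.r.t. each map -- both hypothetical.

def grad_cam(fmaps, grads):
    """Return a coarse damage-localization heatmap.

    fmaps, grads: lists of K HxW maps (nested lists of floats).
    """
    h, w = len(fmaps[0]), len(fmaps[0][0])
    # Channel weights: global-average-pool the gradients of each map.
    weights = [sum(sum(row) for row in g) / (h * w) for g in grads]
    # Weighted sum of feature maps, then ReLU to keep positive evidence.
    cam = [[max(0.0, sum(wk * fmaps[k][i][j] for k, wk in enumerate(weights)))
            for j in range(w)] for i in range(h)]
    return cam

# Toy example: two 2x2 feature maps with uniform gradients.
fmaps = [[[1.0, 0.0], [0.0, 1.0]],
         [[0.0, 2.0], [0.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]],       # pools to weight +1.0
         [[-1.0, -1.0], [-1.0, -1.0]]]   # pools to weight -1.0
heatmap = grad_cam(fmaps, grads)         # -> [[1.0, 0.0], [0.0, 1.0]]
```

In the full pipeline the heatmap would be upsampled and overlaid on the input image to localize the damaged region before the severe/mild/no-damage classification is reported.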
The details of this work are available in the following paper: [Cloud-IoT-Edge-CPDMS]
Intelligent Computational Techniques for the Better World 2020: Concepts, Methodologies, Tools, and Applications: Over the past few decades, researchers and practitioners have increasingly turned to search and optimization methods built on advanced machine learning, nature-inspired computation, and metaheuristics to solve problems spanning every spectrum of human endeavor. Evolutionary and nature-inspired techniques have given us remarkable power to solve multimodal and combinatorial problems more intelligently. Deep learning, a new frontier in AI research, has taken machine learning and related AI capabilities to the next level by constructing algorithms that make systems intelligent enough to become better analyzers. These techniques and concepts are inspired by nature and biological behavior. The intelligent use of these techniques, collectively known as smart techniques, has enabled us to solve complex computational problems across diverse domains in an affordable amount of time. Clearly, these smart techniques involve complex processes that are evolving rapidly across all spheres of world affairs. This introductory chapter provides an in-depth study of intelligent computational techniques and their interdisciplinary applications in different domains. To stimulate future work, we conclude the chapter by proposing possible new research directions and outlining several open issues.
The details of this work are available in the following paper: [Intelligent-Cloud]
Cloud of Things Assimilation with Cyber Physical System: A Review: Cloud of Things (CoT) provides a smart, intelligent platform capable of virtually limitless computation by using resources in an optimized fashion. With the advent of social networks, modern-day society relies on advanced computing technologies and communication infrastructures to share real-world events. Here comes the role of cyber-physical systems (CPSs), which tightly couple computation with the physical world by integrating computational processes with physical ones. The linkage between computational resources and physical systems is realized through sensors and actuators. CoT can play a crucial role in extending the capabilities of CPSs. While this integration is still taking shape, future CPSs can be dynamically adapted to domains such as healthcare, manufacturing, disaster management, agriculture, and transportation. The lack of overall architectural awareness provides ample space and motivation for academia and industry to pursue further studies. In this chapter, a complete literature study of CoT- and CPS-oriented architectures is conducted, which will act as a catalyst to improve research efforts and the understanding of tools and techniques. The chapter also presents current research openings in this area, and the study looks beyond current architectural trends. Its major contribution is to summarize CoT- and CPS-oriented critical infrastructures with respect to cutting-edge technologies and design considerations, and their overall impact on the real world.
The details of this work are available in the following paper: [Cloud of Things]
Modeling the Green Cloud Continuum: integrating energy considerations into Cloud–Edge models: The energy consumption of Cloud–Edge systems is becoming a critical concern economically, environmentally, and societally; some studies suggest data centers and networks will collectively consume 18% of global electrical power by 2030. New methods are needed to mitigate this consumption, e.g. energy-aware workload scheduling, improved usage of renewable energy sources, etc. These schemes need to understand the interaction between energy considerations and Cloud–Edge components. Model-based approaches are an effective way to do this; however, current theoretical Cloud–Edge models are limited, and few consider energy factors. This paper analyses all relevant models proposed between 2016 and 2023, discovers key omissions, and identifies the major energy considerations that need to be addressed for Green Cloud–Edge systems (including interaction with energy providers). We investigate how these can be integrated into existing and aggregated models, and conclude with the high-level architecture of our proposed solution to integrate energy and Cloud–Edge models together.
The details of this work are available in the following paper: [Modeling the Green Cloud Continuum]
Formal Models for the Energy-Aware Cloud-Edge Computing Continuum: Analysis and Challenges: Cloud infrastructures are rapidly evolving from centralised systems to geographically distributed federations of edge devices, fog nodes, and clouds. These federations (often referred to as the Cloud-Edge Continuum) are the foundation upon which most modern digital systems depend, and consume enormous amounts of energy. This consumption is becoming a critical issue as society's energy challenges grow, and is a great concern for power grids which must balance the needs of clouds against other users. The Continuum is highly dynamic, mobile, and complex; new methods to improve energy efficiency must be based on formal scientific models that identify and take into account a huge range of heterogeneous components, interactions, stochastic properties, and (potentially contradictory) service-level agreements and stakeholder objectives. Currently, few formal models of federated Cloud-Edge systems exist - and none adequately represent and integrate energy considerations (e.g. multiple providers, renewable energy sources, pricing, and the need to balance consumption over large areas with other non-Cloud consumers, etc.). This paper conducts a systematic analysis of current approaches to modelling Cloud, Cloud-Edge, and federated Continuum systems with an emphasis on the integration of energy considerations. We identify key omissions in the literature, and propose an initial high-level architecture and approach to begin addressing these - with the ultimate goal to develop a set of integrated models that include data centres, edge devices, fog nodes, energy providers, software workloads, end users, and stakeholder requirements and objectives. We conclude by highlighting the key research challenges that must be addressed to enable meaningful energy-aware Cloud-Edge Continuum modelling and simulation.
The details of this work are available in the following paper: [Formal Models for the Energy-Aware Cloud-Edge Computing Continuum]
MAG-D: A multivariate attention network based approach for cloud workload forecasting: The Coronavirus pandemic and the shift to work-from-home have drastically changed working styles and forced a rapid move towards cloud-based platforms and services for seamless functioning. The pandemic has accelerated a permanent shift in cloud migration; it is estimated that over 95% of digital workloads will reside in cloud-native platforms. Real-time workload forecasting and efficient resource management are two critical challenges for cloud service providers. Because cloud workloads are highly volatile and chaotic due to their time-varying nature, classical machine learning-based prediction models fail to produce accurate forecasts. Recent advances in deep learning have gained massive popularity for forecasting highly nonlinear cloud workloads; however, they still fall short of excellent forecasting outcomes. Consequently, there is a demand for more accurate forecasting algorithms. Therefore, in this work, we propose 'MAG-D', a Multivariate Attention and Gated recurrent unit based Deep learning approach for cloud workload forecasting in data centers. We performed an extensive set of experiments on the Google cluster traces, and we confirm that MAG-D exploits the long-range nonlinear dependencies of cloud workloads and improves the prediction accuracy on average compared to recent techniques applying hybrid methods based on Long Short-Term Memory networks (LSTM), Convolutional Neural Networks (CNN), Gated Recurrent Units (GRU), and Bidirectional LSTM (BiLSTM).
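The attention mechanism layered over the recurrent units can be sketched in a few lines. This is an illustrative additive-style attention step, not the paper's exact architecture; the names `hidden_states` and `score_vec` are assumptions standing in for the GRU outputs and a learned scoring vector:

```python
import math

# Hypothetical sketch of an attention step over GRU hidden states, in the
# spirit of MAG-D (not the paper's exact formulation): score each hidden
# state, softmax the scores, and form a context vector for the forecaster.

def attention_context(hidden_states, score_vec):
    """hidden_states: T vectors of size D; score_vec: size-D scoring vector."""
    # Dot-product score per time step.
    scores = [sum(h_i * v_i for h_i, v_i in zip(h, score_vec))
              for h in hidden_states]
    m = max(scores)                      # stabilized softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]   # attention weights, sum to 1
    # Context vector: attention-weighted sum of the hidden states.
    d = len(hidden_states[0])
    context = [sum(a * h[i] for a, h in zip(alphas, hidden_states))
               for i in range(d)]
    return alphas, context
```

A downstream dense layer would then map the context vector to the workload forecast; the weights `alphas` show which past time steps the model attended to.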
The details of this work are available in the following paper: [MAG-D: A multivariate attention network]
STOWP: A light-weight deep residual network integrated windowing strategy for storage workload prediction in cloud systems: Accurate storage workload forecasting for big data applications is a constructive approach to improving job scheduling and fine-grained load balancing in real-time cluster systems. However, despite recent advances in deep learning architectures, demand remains for more accurate workload time series forecasting algorithms. Therefore, we propose a light-weight STOrage Workload time series Prediction method named 'STOWP' that integrates the Neural Basis Expansion Analysis (N-BEATS) deep model with a windowing strategy. The STOWP approach implements a multi-input-multi-output (MIMO) window strategy to capture the historical variation patterns of the storage workload data. Furthermore, a within-window scaling strategy is adopted to effectively estimate the diversity of the workload requests over different time horizons. For experimental evaluation, we used the Web-Search dataset containing a search engine's real-time I/O workload traces. To improve the performance of STOWP, sensitivity to the hyper-parameters is thoroughly investigated. The results show that STOWP improves RMSE by at least 3.33% and MAE by at least 3.44% in comparison with existing benchmark storage workload forecasting techniques.
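The MIMO windowing with within-window scaling described above can be sketched as follows. The parameter names (`w_in`, `w_out`) and the min-max choice of scaler are assumptions for illustration; the idea is that each input window is scaled by its own statistics and paired with a multi-step target:

```python
# Illustrative sketch of MIMO windowing with within-window scaling: slide
# an input window of `w_in` steps over the series, scale each window by
# its own min/max, and pair it with the next `w_out` steps as the
# multi-output target. Names and the min-max scaler are assumptions.

def make_windows(series, w_in, w_out):
    samples = []
    for start in range(len(series) - w_in - w_out + 1):
        window = series[start:start + w_in]
        target = series[start + w_in:start + w_in + w_out]
        lo, hi = min(window), max(window)
        span = (hi - lo) or 1.0          # guard against flat windows
        scaled = [(x - lo) / span for x in window]   # within-window scaling
        samples.append((scaled, target))
    return samples
```

Because each window carries its own scale, bursts and lulls in the I/O trace are normalized locally, which is what lets the model see the shape of the variation rather than its absolute magnitude.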
The details of this work are available in the following paper: [Storage workload prediction in cloud systems]
Deep learning-based multivariate resource utilization prediction for hotspots and coldspots mitigation in green cloud data centers: Dynamic virtual machine (VM) consolidation is a constructive technique to enhance resource usage and is extensively employed to minimize data centers' energy consumption. However, current consolidation techniques rely heavily on reducing the number of actively used physical machines (PMs) based on their current resource utilization, without considering future resource demands. Moreover, many reported works on cloud workload prediction apply univariate time series forecasting models and neglect the dependencies among other resource utilization metrics. The resulting inaccurate predictions, unnecessary migrations, high migration costs, and increased service-level agreement violations (SLAVs) may nullify the consolidation benefits. To address this issue efficiently, we propose a multivariate resource usage prediction-based hotspots and coldspots mitigation approach that considers both current and future usage of resources with O(sk) time complexity, where s and k denote the number of PMs and VMs, respectively. The proposed technique uses a clustering-based stacked bidirectional Long Short-Term Memory (BiLSTM) deep learning network to predict the future memory and CPU usage of PMs and VMs with high accuracy and O(Q(Q+W)Θ) computational complexity, where Q, W, and Θ represent the number of hidden layer cells, outputs, and training epochs, respectively. Through extensive simulations based on Google's cluster workload traces, we demonstrate that our proposed method obtains substantial improvements in prediction performance, energy efficiency, number of actively used PMs, VM migrations, and SLA violations over the benchmark approaches.
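The hotspot/coldspot test implied by the abstract can be sketched as a threshold rule over both current and predicted utilization. The threshold values and function signature below are illustrative assumptions, not the paper's parameters:

```python
# Sketch of a hotspot/coldspot classifier using both current and predicted
# CPU/memory utilization, as the abstract motivates. The 0.80/0.20
# thresholds and argument names are illustrative assumptions.

UPPER, LOWER = 0.80, 0.20

def classify_pm(cur_cpu, cur_mem, pred_cpu, pred_mem):
    peak = max(cur_cpu, cur_mem, pred_cpu, pred_mem)
    if peak > UPPER:
        return "hotspot"      # migrate VMs away before SLA violations occur
    if peak < LOWER:
        return "coldspot"     # candidate for consolidation and switch-off
    return "normal"
```

Folding the predicted usage into the decision is what avoids the wasted migrations of purely reactive consolidation: a PM that is quiet now but predicted to spike is not treated as a coldspot.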
The details of this work are available in the following paper: [Multivariate resource utilization prediction]
On demand clock synchronization for live VM migration in distributed cloud data centers: Live migration of virtual machines (VMs) has become an extremely powerful tool for cloud data center management, providing the significant benefit of seamless VM mobility among physical hosts within a data center or across multiple data centers without interrupting the running service. However, even with all the enhanced techniques that ensure a smooth and flexible migration, the downtime of a VM during live migration can still range from a few milliseconds to seconds. Many time-sensitive applications and services cannot afford this extended downtime, and their clocks must be perfectly synchronized to ensure no loss of events or information. In such a virtualized environment, clock synchronization with fine precision and bounded error is one of the most complex and tedious tasks for system performance. In this paper, we propose enhanced DTP- and wireless PTP-based clock synchronization algorithms to achieve high precision in intra- and inter-cloud data center networks. We thoroughly analyze the performance of the proposed algorithms using different clock measurements. Through simulation and real-time experiments, we also show the effect of various performance parameters on the data center networking architectures.
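The PTP family of protocols mentioned above rests on a two-way timestamp exchange; a minimal sketch of that standard offset/delay calculation (not the paper's enhanced algorithms) is:

```python
# Minimal sketch of the two-way timestamp exchange underlying PTP-style
# synchronization (the paper's enhanced DTP/wireless-PTP algorithms build
# on this idea). t1: master send, t2: slave receive, t3: slave send,
# t4: master receive. Assumes symmetric path delay.

def ptp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # estimated one-way path delay
    return offset, delay

# Example: slave clock runs 5 units ahead, true one-way delay is 10.
offset, delay = ptp_offset_delay(0.0, 15.0, 20.0, 25.0)  # -> (5.0, 10.0)
```

The slave then subtracts `offset` from its clock; the accuracy of the correction is bounded by how asymmetric the forward and reverse paths really are, which is exactly where virtualized and wireless links make the problem hard.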
The details of this work are available in the following paper: [On-Demand-Clock-Synchronization]
Interval graph multi-coloring-based resource reservation for energy-efficient containerized cloud data centers: Containerized deployment of microservices has quickly become a well-known virtualization technology due to its high portability, scalability, good isolation, and lightweight footprint. However, it faces several challenges in terms of capital and operational expenses in large-scale data centers. In particular, services in the cloud are usually instantiated as groups of containers, which continuously trigger frequent communication workloads and hence significantly degrade service performance when containers are allocated inefficiently. Thus, to deploy microservices, service providers must consider different types of objectives, such as optimizing the communication cost or the operational cost, joint objectives that have previously been studied only independently. In this paper, we study the problem of communication-aware container-based advance reservation to optimize the energy and communication cost of microservices deployment. We apply the interval graph model to map the container reservation scenario of microservices and derive various performance bounds. We then propose greedy graph multi-coloring-based centralized and distributed algorithms to find an efficient allocation of containers. Through theoretical analysis and extensive experimental results, we demonstrate that the proposed approaches can decrease the total cost by up to 31% compared to current state-of-the-art methods.
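The interval-graph view can be sketched with basic greedy coloring: reservations that overlap in time conflict and must receive different "colors" (e.g. distinct servers), and processing intervals in start-time order while reusing the lowest free color is optimal on interval graphs. This is a single-coloring illustration of the idea, not the paper's multi-coloring algorithms:

```python
# Greedy coloring of time-interval reservations: overlapping intervals get
# different colors (think distinct servers). Start-time order with
# lowest-free-color reuse is optimal for interval graphs. This sketches
# the idea behind the paper's multi-coloring approach, simplified to one
# color per reservation; the interval data are illustrative.

def greedy_interval_coloring(intervals):
    """intervals: list of (start, end) half-open reservations."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    colors = [None] * len(intervals)
    for i in order:
        s, e = intervals[i]
        # Colors already taken by reservations overlapping [s, e).
        used = {colors[j] for j in order
                if colors[j] is not None
                and intervals[j][0] < e and s < intervals[j][1]}
        c = 0
        while c in used:
            c += 1
        colors[i] = c
    return colors
```

The number of distinct colors equals the peak number of simultaneous reservations, so the coloring directly bounds how many servers must stay powered on, which is the energy lever the paper optimizes.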
The details of this work are available in the following paper: [Containerized-Cloud]
Energy and cost trade-off for computational tasks offloading in mobile multi-tenant clouds: Mobile cloud computing augments smartphones with computation capabilities by offloading computations to the cloud. Recent works consider only the energy savings of mobile devices while neglecting the monetary cost incurred by the offloaded tasks. We might offload several tasks to minimize the total energy consumption of mobile devices; however, this could incur a huge monetary cost. Furthermore, these issues become more complex in a multi-tenant cloud, which has not been adequately addressed in the literature. Thus, to balance the trade-off between the monetary cost and the energy consumption of the mobile devices, we need to decide whether to offload each task to the cloud or run it locally. In this article, we first formulate a 'MinEMC' optimization problem to minimize both the energy consumption and the monetary cost of the mobile devices, and prove that it is NP-hard. We then formulate a special case in which each task requires an equal amount of resources, for which a polynomial-time solution is presented, and propose various policies the cloud can employ to solve the general case. Next, we propose an efficient heuristic named 'Off-Mat' based on distributed stable matching, whose solution determines whether each task is to be offloaded under multiple constraints, and we analyze the complexity of this heuristic. Finally, performance evaluation through simulation demonstrates that the Off-Mat algorithm attains high performance in computational task offloading and scales well as the number of tenants increases.
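The per-task trade-off the abstract motivates can be sketched as a weighted comparison of the two options. This is a toy decision rule, not the MinEMC formulation or the Off-Mat matching; the weight `beta` and all argument names are illustrative assumptions:

```python
# Hypothetical per-task offload decision: offload when the weighted sum of
# device energy and monetary cost in the cloud beats running locally.
# `beta` trades energy against money; this toy rule is an illustration,
# not the paper's MinEMC optimization or Off-Mat matching heuristic.

def should_offload(e_local, e_offload, cloud_price, beta=0.5):
    """e_local: device energy to run locally; e_offload: device energy to
    transmit the task; cloud_price: monetary cost charged by the cloud."""
    local_cost = beta * e_local               # running locally costs no money
    offload_cost = beta * e_offload + (1 - beta) * cloud_price
    return offload_cost < local_cost
```

Sweeping `beta` from 0 to 1 traces out the energy/cost trade-off curve: at `beta=1` only energy matters and almost everything is offloaded, while at `beta=0` only the cloud bill matters and everything runs locally.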
The details of this work are available in the following paper: [Mobile multi-tenant clouds]
Truthful online double auction based dynamic resource provisioning for multi-objective trade-offs in IaaS clouds: Auction designs have recently been adopted for static and dynamic resource provisioning in IaaS clouds, such as Microsoft Azure and Amazon EC2. However, existing mechanisms are mostly restricted to simple auctions, single objectives, offline settings, and one-sided interactions either among cloud users or among cloud service providers (CSPs), and are vulnerable to possible misreports of cloud users' private information. This paper proposes a more realistic scenario of online auctioning for IaaS clouds, with the unique characteristic of elasticity for time-varying arrival of cloud user requests under time-based server maintenance in cloud data centers. We propose an online truthful double auction technique for balancing the multi-objective trade-offs between energy, revenue, and performance in IaaS clouds, consisting of a weighted bipartite matching based winning-bid determination algorithm for resource allocation and a Vickrey-Clarke-Groves (VCG) driven algorithm for payment calculation of winning bids. Through rigorous theoretical analysis and extensive trace-driven simulation studies exploiting Google cluster workload traces, we demonstrate that our mechanism significantly improves performance while guaranteeing truthfulness, heterogeneity, economic efficiency, and individual rationality, with polynomial-time computational complexity.
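The winner-determination side of a double auction can be sketched with the classic sort-and-match rule. The paper's mechanism uses weighted bipartite matching plus VCG payments; this simpler stand-in only shows how buyer bids and seller asks are paired while each trade still creates surplus:

```python
# Sketch of double-auction winner determination: pair the highest buyer
# bids with the cheapest seller asks while each pair creates surplus
# (bid >= ask). The paper's actual mechanism uses weighted bipartite
# matching with VCG payments; this sort-and-match rule is a simplified
# illustrative stand-in, and the price data are made up.

def match_double_auction(bids, asks):
    """bids: buyer offers; asks: seller prices. Returns matched (bid, ask) pairs."""
    bids = sorted(bids, reverse=True)   # highest willingness-to-pay first
    asks = sorted(asks)                 # cheapest supply first
    matches = []
    for b, a in zip(bids, asks):
        if b >= a:                      # trade only if it creates surplus
            matches.append((b, a))
        else:
            break                       # later pairs can only be worse
    return matches

winners = match_double_auction([10, 8, 3], [2, 5, 9])  # -> [(10, 2), (8, 5)]
```

A truthful mechanism then cannot simply charge each winner their own bid; VCG-style payment rules price each winning bid by its externality on the others, which is what removes the incentive to misreport.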
The details of this work are available in the following paper: [IaaS-Cloud]