Research

My research focuses on energy efficiency in systems, from load balancing in data centers to EV integration in the smart grid. The main tool for achieving energy efficiency in most of my work is the temporal flexibility that comes from the service level agreements (SLAs) between service provider and service receiver. Because these systems are often online, i.e., future information is not available at the time of execution, we call our approach ‘dynamic deferral’. We have devised spatiotemporal dynamic deferral techniques for load balancing in geographically distributed data centers, for capacity provisioning inside a data center, and for right-sizing data center networks. We also investigate the impact of dynamic deferral on user satisfaction, devising techniques that capture the loss in value of deferred execution via utility functions and trade off user satisfaction against energy efficiency. In the context of the smart grid, we have proposed EV task deferral techniques for peak shaving, valley filling, and increased renewable energy usage.


Geographical Load Balancing

Increasing energy prices, and the ability to dynamically track price variations thanks to enhancements to the electrical grid, raise the possibility of utilizing cloud computing for energy-efficient computation. Geographical load balancing techniques have recently been suggested for data centers hosting cloud computation in order to reduce energy cost by exploiting electricity price differences across regions. However, these algorithms do not distinguish among the diverse responsiveness requirements of different workloads. In this work [6], we use the flexibility from the Service Level Agreements (SLAs) to differentiate among workloads under bounded latency requirements and propose a novel approach to cost savings for geographical load balancing. We investigate how much workload to execute in each data center and how much to delay and migrate to other data centers for energy saving while meeting deadlines. We present an offline formulation for the geographical load balancing problem with dynamic deferral and give online algorithms that determine the assignment of workload to data centers and the migration of workload between them in order to adapt to dynamic electricity price changes. We compare our algorithms with the greedy approach and show that significant cost savings can be achieved by workload migration and dynamic deferral with future electricity price prediction. We validate our algorithms on MapReduce traces and show that geographical load balancing with dynamic deferral can provide 20-30% cost savings.
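The core idea of deferral under regional price differences can be illustrated with a toy sketch (this is a deliberately simplified greedy variant, not the algorithm from [6]; unit-size jobs, perfect price knowledge, and unbounded regional capacity are all assumptions made for illustration):

```python
def assign_with_deferral(jobs, prices):
    """Toy geographical load balancing with deferral.

    jobs: list of (arrival, deadline) slot indices; each job needs one slot.
    prices: prices[t][r] = electricity price in region r at time slot t.
    Each job picks the cheapest (slot, region) within its deadline window;
    capacities and migration costs are ignored in this sketch.
    """
    plan = []
    for arrival, deadline in jobs:
        best = min(
            ((t, r) for t in range(arrival, deadline + 1)
                    for r in range(len(prices[t]))),
            key=lambda tr: prices[tr[0]][tr[1]],
        )
        plan.append(best)
    return plan

# A job with slack (deadline 1) defers to the cheap slot-1/region-0 price,
# while a job due immediately must take the cheapest region right now.
plan = assign_with_deferral([(0, 1), (0, 0)], [[5, 3], [1, 4]])
```

Even this naive version shows why slack matters: without the deadline window, every job would pay its arrival-slot price.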


Utility-Aware Deferred Load Balancing in the Cloud 

Distributed computing resources in a cloud computing environment provide an opportunity to reduce energy use and cost by shifting loads in response to dynamically varying availability of energy. This variation in electrical power availability is reflected in its dynamically changing price, which can be used to drive workload deferral against performance requirements. But such deferral may cause user dissatisfaction. In this work [5], we quantify the impact of deferral on user satisfaction and utilize the flexibility from the service level agreements (SLAs) to adapt to dynamic price variation. We differentiate among jobs based on their responsiveness requirements and schedule them for energy saving while meeting deadlines and user satisfaction. By representing utility as a decaying function of deferral, we strike a balance between loss of user satisfaction and energy efficiency: no job violates its maximum deadline, and the overall energy cost is minimized. Our simulations on MapReduce traces show that such utility-aware deferred load balancing can reduce energy consumption by ~15%. We also find that modeling utility as a decaying function yields better cost reduction than load balancing with a fixed deadline.
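A minimal sketch of the trade-off shape: a job's utility decays with delay while electricity prices vary, so the scheduler picks the slot maximizing net value. The exponential decay form and the decay rate here are hypothetical choices for illustration, not the utility functions used in [5]:

```python
import math

def best_slot(arrival, deadline, prices, decay=0.1):
    """Pick the execution slot in [arrival, deadline] maximizing
    utility(delay) - price, where utility decays exponentially with
    delay. `decay` is an illustrative parameter; the hard deadline
    is still enforced by the range bound."""
    def net_value(t):
        delay = t - arrival
        utility = math.exp(-decay * delay)   # value retained after deferring
        return utility - prices[t]
    return max(range(arrival, deadline + 1), key=net_value)

# Deferring one slot captures the cheap price with little utility loss;
# deferring two slots loses too much utility for a smaller price gain.
slot = best_slot(0, 2, [1.0, 0.5, 0.9])
```

With a fixed deadline and no decay term, the job would simply run at the cheapest slot in its window; the decay term is what penalizes long deferrals even when they are cheap.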


Capacity Provisioning in Data Centers 

Recent increases in energy prices have led researchers to seek better ways of provisioning capacity in data centers to reduce the energy wasted due to workload variation. In this work [4], we explore the opportunity for energy cost saving in data centers by utilizing the flexibility from the Service Level Agreements (SLAs) and propose a novel approach to capacity provisioning under bounded latency requirements of the workload. We investigate how many servers to keep active and how much workload to delay for energy saving while meeting every deadline. We present an offline LP formulation for capacity provisioning by dynamic deferral and give two online algorithms to determine the capacity of the data center and the assignment of workload to servers dynamically. We prove the feasibility of the online algorithms and show that their worst-case performance is bounded by constant factors with respect to the offline formulation. We validate our algorithms on MapReduce workload by provisioning capacity on a Hadoop cluster and show that the algorithms perform much better in practice than naive `follow the workload' provisioning, resulting in 20-40% cost-savings.
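The contrast with `follow the workload' provisioning can be sketched as follows. This is a lazy toy heuristic, not one of the two online algorithms from [4]: it serves work only when its (hypothetical) deadline arrives, which smooths bursts across the slack window:

```python
import math

def provision(workload, server_rate, max_delay):
    """Toy lazy capacity provisioning with dynamic deferral.

    workload[t]: jobs arriving at slot t; server_rate: jobs a server
    finishes per slot; max_delay: illustrative uniform slack in slots.
    Returns the number of active servers per slot. Work is kept in a
    FIFO backlog and only as many servers are powered as needed to
    finish everything whose deadline has arrived."""
    backlog = []                       # [deadline_slot, remaining_jobs]
    servers = []
    for t, arriving in enumerate(workload):
        backlog.append([t + max_delay, arriving])
        due_now = sum(rem for due, rem in backlog if due <= t)
        n = math.ceil(due_now / server_rate) if due_now else 0
        capacity = n * server_rate
        for item in backlog:           # FIFO: oldest (due) work served first
            served = min(item[1], capacity)
            item[1] -= served
            capacity -= served
        backlog = [item for item in backlog if item[1] > 0]
        servers.append(n)
    return servers

# max_delay=0 is `follow the workload': servers spike with the burst.
# max_delay=2 defers the burst, so the spike moves to the deadline slot
# (a real algorithm would also spread it to reduce switching).
spike_now = provision([10, 0, 0], server_rate=5, max_delay=0)
deferred = provision([10, 0, 0], server_rate=5, max_delay=2)
```

The sketch ignores server switching costs, which are central to the actual formulation; it only shows how slack decouples provisioning from instantaneous arrivals.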


Right-Sizing of Data Center Networks 

Data center topologies typically consist of multi-rooted trees with many equal-cost paths between any given pair of hosts. Existing power optimization techniques do not exploit this property of data center networks for power proportionality. In our work [3], we exploit this opportunity and show that significant energy savings can be achieved via path consolidation. We present an offline formulation for the flow assignment in a data center network and develop an online algorithm based on path consolidation for dynamic right-sizing of the network to save energy. We also bound the competitive ratio of the online algorithm in terms of the maximum path length in the network. To validate our algorithm, we built a flow-level simulator for a data center network. Our simulations on flow traces generated from MapReduce workload show ~80% reduction in network energy consumption and ~25% more energy savings compared to the previous method for saving energy in data center networks.
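The consolidation idea admits a compact greedy sketch: among a flow's equal-cost candidate paths, prefer the one reusing links that are already active, so the remaining links can be powered down. This toy ignores link capacities and is not the online algorithm from [3]; the link-id sets are hypothetical:

```python
def consolidate(flows, paths):
    """Greedy path consolidation sketch.

    flows: ordered list of flow ids.
    paths[f]: list of candidate equal-cost paths for flow f, each a
    frozenset of link ids. For each flow, choose the candidate that
    activates the fewest new links; capacities are ignored.
    """
    active = set()                     # links that must stay powered on
    choice = {}
    for f in flows:
        best = min(paths[f], key=lambda p: len(p - active))
        choice[f] = best
        active |= best
    return choice, active

# Two flows sharing an equal-cost option collapse onto one path,
# leaving the links of the unused alternatives free to sleep.
paths = {
    'a': [frozenset({1, 2}), frozenset({3, 4})],
    'b': [frozenset({1, 2}), frozenset({5, 6})],
}
choice, active = consolidate(['a', 'b'], paths)
```

In a multi-rooted tree the candidate sets correspond to the equal-cost root choices, which is exactly the structural property the paragraph above highlights.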


Smart Micro-Grid 

In this work [2], we explore the opportunity for energy saving in data centers using the flexibility from the Service Level Agreements (SLAs) and propose a novel approach to workload scheduling that incorporates renewable energy sources. We investigate how much renewable power to store and how much workload to delay to increase renewable usage while meeting latency constraints. We present an LP formulation for mitigating variability in renewable generation by dynamic deferral and give two online algorithms to determine the optimal balance of workload deferral and power use. We prove the feasibility of the online algorithms and show that their worst-case performance is bounded by constant factors with respect to the offline formulation. We validate our algorithms by trace-driven simulation on MapReduce workload together with collected and publicly available wind and solar power generation data. Results show that the algorithms give 20-30% energy savings compared to the naive `follow the workload' policy.


EV Integration to Smart Grid

Utilities face complex problems of peak demand and intermittent supply, made more pressing by the need to integrate large EV loads and distributed generation. The added flexibility of EV loads, which can charge at varying rates, together with forecasts of renewable availability, can be used to reduce integration costs. We show that, in addition, the look-ahead provided by asking EVs to telegraph their arrival times can be exploited to shave peaks. In this work [1], we propose a novel optimization-theoretic approach to scheduling EV charging that delays workload to minimize charging cost while meeting latency constraints. We present an online algorithm for dynamic deferral that determines a near-optimal balance of workload delay and power use. We validate our algorithm on simulated EV workload, wind and solar power generation data collected from our micro-grid, and publicly available electricity price traces from the grid. Results show that the algorithm gives 10-30% energy savings compared to the naive `follow the workload' policy.


References

  1. Muhammad Abdullah Adnan, Balakrishnan Narayanaswamy and Rajesh Gupta, "Online Reservation and Deferral of EV Charging Tasks to Reduce Energy Use Variability in Smart Grids," Proc. of IEEE International Conference on Smart Grid Communications (SmartGridComm), Venice, Italy, 2014.
  2. Muhammad Abdullah Adnan and Rajesh Gupta, "Workload Shaping to Mitigate Variability in Renewable Power Use by Data Centers," Proc. of IEEE International Conference on Cloud Computing (CLOUD), Alaska, USA, June 2014.
  3. Muhammad Abdullah Adnan and Rajesh Gupta, "Path Consolidation for Dynamic Right-sizing of Data Center Networks," Proc. of IEEE International Conference on Cloud Computing (CLOUD), Santa Clara, USA, June 2013. 
  4. Muhammad Abdullah Adnan, Ryo Sugihara, Yan Ma and Rajesh Gupta, "Energy-Optimized Dynamic Deferral of Workload for Capacity Provisioning in Data Centers", Proc. of International Green Computing Conference (IGCC), Arlington, VA, USA, June 2013.
  5. Muhammad Abdullah Adnan and Rajesh Gupta, "Utility-Aware Deferred Load Balancing in the Cloud Driven by Dynamic Pricing of Electricity," Proc. of ACM/IEEE Design Automation and Test in Europe (DATE), Grenoble, France, March 2013. [pdf]
  6. Muhammad Abdullah Adnan, Ryo Sugihara and Rajesh Gupta, "Energy Efficient Geographical Load Balancing via Dynamic Deferral of Workload," Proc. of IEEE International Conference on Cloud Computing (CLOUD), Honolulu, Hawaii, USA, June 2012. (Acceptance rate 48/282 = 17%) [pdf]