The CityLearn Challenge 2021 has now concluded! The winners and the leaderboard have been announced at the RLEM workshop:
Note: Non-author registration is free for the online workshop and BuildSys
About the challenge
The CityLearn Challenge is an opportunity for researchers from multi-disciplinary fields to investigate the potential of artificial intelligence and distributed control systems to tackle multiple problems within the energy domain. High peaks of electricity consumption in urban areas not only increase the price of electricity, but also cause grid instability and even blackouts. Furthermore, growing energy use and carbon emissions continue to contribute to climate change.
The increase of distributed energy storage systems (e.g. batteries, EVs, water heaters) and of distributed energy generation resources (e.g. PV panels) constitutes an opportunity to better coordinate all these systems and achieve grid-interactive, energy-efficient connected buildings.
We introduce a multi-agent control challenge in which the participating teams must coordinate the energy consumed by each of the buildings within a simulated micro-grid. There are different objectives or evaluation metrics, which are described in the Scoring section.
What does the simulation look like?
The agents must control the energy stored by a micro-grid of 9 buildings in real-time for a period of 4 simulated years, on an hourly time-scale. Each building has up to 3 action variables (domestic hot water storage, chilled water storage, and electrical storage). This is a coordination challenge, so most of the objectives of this control problem depend on the joint actions of all the buildings combined.
In contrast to traditional reinforcement learning problems, which consist of multiple episodes, the CityLearn Challenge has a single episode of 4 years. Your agents will need to explore and exploit the best control policy they can learn within those 4 years.
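Conceptually, the interaction follows a standard gym-style loop, stretched over one long episode. Below is a minimal sketch with a mock environment; the real CityLearn environment, state dimensions, and reward definitions come from the challenge repository, so every name here is illustrative:

```python
import random

HOURS = 4 * 8760          # 4 simulated years at hourly resolution
N_BUILDINGS = 9
N_ACTIONS = 3             # DHW storage, chilled-water storage, electrical storage

class MockEnv:
    """Stand-in for the CityLearn environment (illustrative only)."""
    def reset(self):
        return [[0.0] * 5 for _ in range(N_BUILDINGS)]   # one state vector per building
    def step(self, actions):
        next_states = [[random.random() for _ in range(5)] for _ in range(N_BUILDINGS)]
        rewards = [-abs(sum(a)) for a in actions]        # toy per-building reward
        return next_states, rewards

env = MockEnv()
states = env.reset()
for t in range(HOURS):
    # One joint action per time step: each building picks its storage actions.
    actions = [[random.uniform(-1, 1) for _ in range(N_ACTIONS)]
               for _ in range(N_BUILDINGS)]
    states, rewards = env.step(actions)
# There is no second episode: learning must happen inside this single rollout.
```

The key design consequence is that exploration is expensive: every exploratory hour is also a scored hour.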
Your team should design the agents based on Climate Zone 5, which contains the 4 years of data. Feel free to modify the building attributes that are located in the file data/Climate_Zone_5/building_attributes.json if you want. This file will not be submitted, but modifying it could give you a better understanding of the robustness of your control agents with respect to the baseline RBC controller. When we evaluate your submission, we will modify the building attributes in the building_attributes.json file.
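As a sketch of what such a robustness experiment could look like, the snippet below perturbs a hypothetical set of building attributes; the field names are illustrative, and the real ones are defined in data/Climate_Zone_5/building_attributes.json:

```python
import copy
import json

# Illustrative attributes; the real fields live in
# data/Climate_Zone_5/building_attributes.json.
attributes = {
    "Building_1": {"Battery": {"capacity": 75.0}, "Chilled_Water_Tank": {"capacity": 3.0}},
    "Building_2": {"Battery": {"capacity": 50.0}, "Chilled_Water_Tank": {"capacity": 2.0}},
}

def scale_batteries(attrs, factor):
    """Return a copy with every battery capacity scaled, to probe agent robustness."""
    out = copy.deepcopy(attrs)
    for building in out.values():
        building["Battery"]["capacity"] *= factor
    return out

halved = scale_batteries(attributes, 0.5)
# Serialize and write this back to building_attributes.json for a local test run.
text = json.dumps(halved, indent=2)
```

Running your agent against several such perturbed files locally is a cheap proxy for the modified attributes we will use during evaluation.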
First, you must have registered your team here. All the teams must submit at least 3 files (4 files if you submit both a RL and a non-RL agent):
agent.py and/or agent_non_rl.py
All of them can be found in the folder submission_files. If you are submitting an agent that is not based on reinforcement learning, you can rename the file agent.py to agent_non_rl.py and submit it with the rest of your files. You can send both an agent.py and an agent_non_rl.py file.
How do we evaluate your submission?
To evaluate your submission, we will add your files to the main directory of the CityLearn repository and run the file main.py to get your scores. Therefore, you must make sure that your files work properly when we run main.py.
If you submit the file agent_non_rl.py, we will import it within main.py (adding the line "from agent_non_rl import Agent" at the beginning of main.py). Just make sure your class Agent runs properly when we import it.
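Whatever your controller does internally, main.py only interacts with it through the Agent class. A minimal skeleton might look like the following; the constructor arguments and method names here are illustrative, so match them to the template provided in submission_files:

```python
import random

class Agent:
    """Skeleton of a submission agent. Constructor arguments and method names
    are illustrative; follow the template in submission_files exactly."""

    def __init__(self, num_buildings=9, num_actions=3):
        self.num_buildings = num_buildings
        self.num_actions = num_actions

    def select_action(self, states):
        # One action vector per building; replace with your learned policy.
        return [[random.uniform(-1, 1) for _ in range(self.num_actions)]
                for _ in range(self.num_buildings)]

    def add_to_buffer(self, states, actions, rewards, next_states, done):
        # Store the transition and (optionally) update the policy online.
        pass

agent = Agent()
actions = agent.select_action(states=[[0.0]] * 9)
```

Keeping all learning logic behind these two entry points is what lets the same main.py drive every team's agent unchanged.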
We will run the agents on a different set of 9 buildings than the one we provide in the GitHub repository, so make sure your agent can adapt to new environments and does not overfit the one we provide.
Deadline & Where to Submit
We will be accepting submissions until August 8, 11:59 pm AoE. To submit, compress your submission files into a .zip file named after your team (your_team_name.zip), and submit it using the following link: https://utexas.app.box.com/f/8f782f418a7c4754b9b563c3394ea863
The agents submitted must be based on reinforcement learning, unless your team specifically submits a non-RL agent to the non-RL category.
Mixed control approaches that are based on RL, but that combine RL with some other type of controller (such as MPC, rule-based controller, etc), are valid and can be submitted as RL agents instead of as non-RL agents.
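As an illustration of such a hybrid, the sketch below blends an RL proposal with a toy rule-based prior (charge at night, discharge during the afternoon peak); the schedule and blending weight are invented for the example:

```python
def rule_based_action(hour):
    """Toy RBC prior: charge storage at night, discharge at the afternoon peak."""
    if hour >= 22 or hour <= 6:
        return 1.0      # charge
    if 15 <= hour <= 20:
        return -1.0     # discharge
    return 0.0

def hybrid_action(rl_action, hour, blend=0.5):
    """Blend an RL proposal with the rule-based prior.

    With blend=1.0 the controller is purely RL; with blend=0.0 it is purely
    rule-based. Any blend that actually uses the RL proposal would count as
    an RL submission under the rule above.
    """
    return blend * rl_action + (1 - blend) * rule_based_action(hour)

a = hybrid_action(rl_action=0.2, hour=18)   # afternoon peak hour
```

A common variant is to anneal the blend toward the RL policy as it accumulates experience over the 4-year episode.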
None of the agents you submit, or any of the classes in your files, is allowed to directly read the files that are in the directory "data".
The submission of pre-trained RL agents is not allowed.
Your agent must take less than 10 hours to run in a free Google Colab notebook, using GPU acceleration, or a comparable MS Azure notebook.
The main file must be used as it is provided. This may make the use of RL libraries and packages more difficult, but will give you more flexibility to implement decentralized RL agents that can potentially share information with each other.
In order to receive a prize, your team must give us permission to make its solution to the challenge publicly available, and provide a list of all the authors for reference.
To avoid conflicts of interest with the organizers, teams from The University of Texas at Austin, the University of Colorado Boulder, NREL, or Onboard Data cannot receive prize money or appear on the main leaderboard. However, they can still submit their solutions and score, in which case we may create a separate table that shows their scores.
Vázquez-Canteli, J.R., Dey, S., Henze, G., and Nagy, Z., "MARLISA: Multi-Agent Reinforcement Learning with Iterative Sequential Action Selection for Load Shaping of Grid-Interactive Connected Buildings", BuildSys '20, Yokohama, Japan, 2020.
Vázquez-Canteli, J.R., Ulyanin, S., Kämpf J., and Nagy, Z., “Fusing TensorFlow with building energy simulation for intelligent energy management in smart cities”, Sustainable Cities and Society, 2018.
Thanks to Our Sponsors
The MIT License (MIT) Copyright (c) 2019, The University of Texas at Austin. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.