Matteo Pozzi

Professor, Civil and Environmental Engineering, Carnegie Mellon University

Long-Term Infrastructure Planning Under Uncertainty

Description

Stakeholders and owners of assets and infrastructure systems exposed to extreme events must make decisions about risk control and mitigation, and assumptions about the extent of future climate and population change can significantly affect the optimization of these actions. Even after the agent has defined a set of scenarios with corresponding probabilities, she still has to assume whether and how, during the management process, she will learn about the likelihood of these scenarios. An agent who optimistically believes she will soon learn perfectly which scenario is correct may prefer to wait for that information before taking relevant mitigation decisions. Conversely, an agent who pessimistically assumes that no additional information will ever become available may prefer to take an action with long-term consequences immediately. The impact of this future “learning rate” about the evolutionary scenario on current optimal planning is not yet well understood, and its analysis poses significant modelling and computational challenges.

This project investigates how to trade off the need for a prompt response against the need to gather sufficient information, and how to select an appropriate level of flexibility in asset design. We formulate the sequential learning and optimization problem within a Hidden-Model Markov Decision Process (HM-MDP) framework that includes uncertainty in the evolution of the system state, and we develop scalable and efficient numerical algorithms based on point-based value iteration to solve the optimization, for both discrete and continuous models.
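To illustrate the kind of backup that point-based value iteration performs in this setting, the sketch below (in Python, with hypothetical array shapes and variable names, not the project's implementation) carries out one point-based backup for a simplified HM-MDP in which the hidden model m is static but unknown, the system state s is fully observed, and the value at each observed state is represented by a set of alpha-vectors over the belief about the models.

import numpy as np

def pbvi_backup(belief_points, Gamma, P, R, gamma=0.95):
    """One point-based value-iteration backup for a simplified HM-MDP sketch.

    P[m, a, s, s2] : probability of moving from state s to s2 under model m and action a.
    R[m, a, s]     : immediate reward under model m, action a, state s.
    Gamma[s]       : list of alpha-vectors (length-M arrays), so V(s, b) = max_alpha b @ alpha.
    Returns a new set of alpha-vectors for each observed state.
    """
    M, A, S, _ = P.shape
    new_Gamma = [[] for _ in range(S)]
    for s in range(S):
        for b in belief_points:
            best_alpha, best_val = None, -np.inf
            for a in range(A):
                # Immediate reward, viewed as a vector over the hidden models.
                alpha_as = R[:, a, s].astype(float).copy()
                for s2 in range(S):
                    # Weight candidate alpha-vectors by the (unnormalised)
                    # posterior over models after observing the transition s -> s2.
                    w = b * P[:, a, s, s2]
                    future = max(Gamma[s2], key=lambda al: w @ al)
                    alpha_as = alpha_as + gamma * P[:, a, s, s2] * future
                if b @ alpha_as > best_val:
                    best_alpha, best_val = alpha_as, b @ alpha_as
            new_Gamma[s].append(best_alpha)
    return new_Gamma

# Illustrative usage (all numbers hypothetical):
M, A, S = 3, 2, 4                                  # climate models, actions, system states
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(M, A, S))      # each P[m, a, s, :] sums to one
R = -rng.random((M, A, S))                         # negative costs as rewards
Gamma = [[np.zeros(M)] for _ in range(S)]          # initial value function
beliefs = [np.array([1/3, 1/3, 1/3]), np.array([0.8, 0.1, 0.1])]
for _ in range(30):                                # repeated backups at the belief points
    Gamma = pbvi_backup(beliefs, Gamma, P, R)

Because the belief lives only over the small set of candidate models, rather than over the full system state, backups of this form remain tractable even when the underlying asset model is rich.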

In our recent work, we describe an application of the HM-MDP framework to flood-risk mitigation. Depending on what the homeowner believes about the house’s chance of flooding now and in the future, and on the assumed availability of future information, it may be optimal to invest in elevating the house to reduce the risk. The agent can select when to elevate the house and by how much: we investigate how this decision should depend on assumptions about the available information. The figure below reports the optimal policy for two alternative assumptions on available information, with elevation plotted against the belief about the models. Three models, representing no (m=1), low (m=2) and high (m=3) climate change, are considered, and the triangle in the picture defines the domain of possible initial beliefs. When no future information is available, as in graph (a), the agent should act immediately, elevating the asset by an amount that depends on the belief. However, if observations are expected, as in graph (b), the agent should do nothing and wait over a large portion of the belief region: immediate action is required only if the initial belief tends strongly toward low and/or high climate change.

Open-loop initial policy (a) and policy with expected significant information (b), as functions of the belief.
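To give a flavour of how the open-loop case in panel (a) can be computed, the short sketch below selects the elevation that minimizes the expected discounted lifetime cost under a fixed belief, assuming no further information about the climate model will ever arrive. All numbers (flood probabilities, costs, discount factor) are illustrative placeholders, not values from the paper.

import numpy as np

# Hypothetical inputs: annual flood probability for each climate model m
# at each candidate elevation h (rows: m = 1, 2, 3; values are made up).
elevations = np.array([0.0, 0.5, 1.0, 1.5])        # metres of added elevation
p_flood = np.array([
    [0.02, 0.010, 0.004, 0.002],                   # no climate change
    [0.04, 0.020, 0.008, 0.004],                   # low climate change
    [0.08, 0.040, 0.015, 0.006],                   # high climate change
])
c_elev = 60.0 * elevations                         # one-time elevation cost
c_flood = 150.0                                    # loss per flood event
gamma = 0.97                                       # annual discount factor

def best_open_loop_elevation(belief):
    """Elevation minimising expected discounted lifetime cost when the
    belief over the models will never be updated (open-loop case)."""
    annuity = gamma / (1.0 - gamma)                # sum of gamma^t for t >= 1
    expected_annual = belief @ p_flood             # expected flood probability per elevation
    total_cost = c_elev + c_flood * expected_annual * annuity
    i = np.argmin(total_cost)
    return elevations[i], total_cost[i]

print(best_open_loop_elevation(np.array([1/3, 1/3, 1/3])))
print(best_open_loop_elevation(np.array([0.1, 0.2, 0.7])))

Under assumption (b), this one-shot minimization is replaced by the full HM-MDP solution, in which waiting acquires value because future observations update the belief before an irreversible elevation is chosen.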

Publications

[initial idea presented here] Pozzi, M., Memarzadeh, M., Klima, K., “Hidden-model processes for adaptive management under uncertain climate change,” ASCE Journal of Infrastructure Systems, 23(4): 04017022, doi: 10.1061/(ASCE)IS.1943-555X.0000376 (2017).

Li, S., Pozzi, M. “Predicting the Condition Evolution of Controlled Infrastructure Components Modeled by Markov Processes,” Proc. of the 13th International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP13), Seoul, South Korea, May 26-30, 2019.

Memarzadeh, M., Pozzi, M., “Model-free reinforcement learning with model-based safe exploration: Modeling adaptive recovery process of infrastructure systems,” Structural Safety, 80: 46-55 (Elsevier), https://doi.org/10.1002/stc.2329 (2019).