South Asia faces repeated natural disasters, especially in countries such as Sri Lanka that are prone to intense monsoon rains and landslides. In June and October 2024, severe floods and landslides disrupted life across the country, revealing critical flaws in the disaster management ecosystem, including fragmented reporting, manual triage delays, and inefficient resource deployment.
To address these issues, ResQConnect was developed as a real-time disaster response platform powered by Artificial Intelligence. Unlike traditional systems, ResQConnect introduces a human-centered, AI-driven coordination framework that processes multimodal inputs (text, images, and voice), uses intelligent multi-agent workflows for triage and task allocation, and embeds human oversight into every decision. It is specifically tailored for high-stakes environments like floods and landslides, enabling timely, transparent, and effective disaster interventions.
The system’s novelty lies in combining multimodal context-aware analysis, Retrieval-Augmented Generation (RAG), and an AI agent pipeline with human-in-the-loop governance to enhance accuracy and accountability. This approach ensures ResQConnect not only automates decisions but also supports and augments human judgment when lives are at stake.
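As a concrete (and deliberately simplified) illustration of this human-in-the-loop pattern, the Python sketch below routes an AI triage proposal through a human operator before any action is taken. All names here (IncidentReport, ai_triage, human_review) are hypothetical, and the keyword heuristic merely stands in for the LLM/RAG triage stage; this is a minimal sketch, not ResQConnect’s implementation.

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3


@dataclass
class IncidentReport:
    """A citizen report; any modality (text, image, voice) may be empty."""
    text: str = ""
    image_path: str = ""
    voice_transcript: str = ""


def ai_triage(report: IncidentReport) -> Severity:
    # Stand-in for the LLM/RAG triage stage: a keyword heuristic keeps
    # the sketch self-contained and runnable offline.
    text = f"{report.text} {report.voice_transcript}".lower()
    if any(w in text for w in ("trapped", "landslide", "drowning")):
        return Severity.CRITICAL
    return Severity.HIGH if "flood" in text else Severity.MEDIUM


def human_review(report: IncidentReport, proposed: Severity) -> Severity:
    # Human-in-the-loop gate: an operator confirms or overrides the AI's
    # proposal before any resources are dispatched.
    print(f"AI proposes {proposed.name} for: {report.text!r}")
    override = input("Enter to accept, or type MEDIUM/HIGH/CRITICAL: ").strip().upper()
    return Severity[override] if override in Severity.__members__ else proposed


if __name__ == "__main__":
    report = IncidentReport(text="Family trapped by landslide near Kegalle")
    print("Final severity:", human_review(report, ai_triage(report)).name)
```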
Literature Review of Related Areas
Conventional disaster management systems are frequently hindered by entrenched issues, primarily fragmented communication, delayed coordination, and a lack of adaptability. Fragmented communication emerges as a structural flaw in many national disaster recovery frameworks. In the United States, disaster response is distributed across more than 30 federal agencies, each operating with different mandates and limited data-sharing capabilities. This complex bureaucratic landscape hinders coordination with state and local authorities, as seen in the aftermath of Hurricane Sandy, where overlapping agency responsibilities and conflicting protocols severely impeded coordinated relief efforts.
Additionally, the rigidity of conventional systems makes them ill-suited to dynamic disaster scenarios. For example, centralized communication infrastructures are highly vulnerable to disruption during earthquakes or hurricanes, impairing the rapid flow of critical information. Moreover, the reliance on pre-defined protocols and top-down control inhibits real-time adaptation to evolving threats, resource needs, or environmental changes. This lack of flexibility contributes to suboptimal outcomes and delays in deploying life-saving measures. Taken together, these limitations underline the urgent need for systems capable of decentralized coordination, real-time learning, and situational adaptability to respond effectively to the unpredictable nature of natural disasters.
Multi-Agent Systems (MAS) offer a decentralized and collaborative alternative to traditional hierarchical models. MAS architectures consist of autonomous software or hardware agents (e.g., drones, sensors) that coordinate via shared protocols to perform tasks such as information gathering, resource allocation, and decision-making. For instance, MASDM introduces agent swarms for hospitals, aid centers, forces, and information providers, enabling distributed control and reducing system fragility. Likewise, the AID framework enhances human-system interaction by supporting the dynamic dispatch of ambulances and broadcasting emergency contact data to the public. Furthermore, MAS’s scalability and modularity allow the dynamic integration of agents, such as new sensors or response teams, which enhances resilience even during infrastructure failures. DyLAN, an advanced MAS framework, improves coordination by adapting interaction strategies in real time and modularizing system components.
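To make the decentralized coordination style these frameworks share more tangible, the following sketch implements a minimal contract-net-like bidding loop in Python. The Agent and Task classes and the capability-only bid score are simplifying assumptions for illustration, not the MASDM or DyLAN designs.

```python
from dataclasses import dataclass


@dataclass
class Task:
    location: str
    need: str  # e.g., "medical", "shelter"


class Agent:
    """A minimal autonomous agent that bids on tasks it can serve."""

    def __init__(self, name: str, capability: str):
        self.name = name
        self.capability = capability

    def bid(self, task: Task) -> float:
        # A real agent would weigh distance, load, and battery; here the
        # capability match is the entire score.
        return 1.0 if task.need == self.capability else 0.0


def allocate(tasks: list[Task], agents: list[Agent]) -> dict[str, str]:
    """Contract-net-style allocation: each task goes to the highest bidder,
    with no central controller holding global state."""
    return {f"{t.need}@{t.location}": max(agents, key=lambda a: a.bid(t)).name
            for t in tasks}


if __name__ == "__main__":
    agents = [Agent("ambulance-1", "medical"), Agent("shelter-team", "shelter")]
    tasks = [Task("Ratnapura", "medical"), Task("Galle", "shelter")]
    print(allocate(tasks, agents))
```

Because each agent scores tasks locally, new agents (a fresh sensor, an arriving response team) can join the bidding without any change to a central plan, which is the resilience property the MAS literature emphasizes.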
Robust disaster response relies heavily on accurate and timely situational intelligence. However, unimodal systems (those relying on a single data source) are often limited under adverse conditions such as fog, smoke, or visual obstructions. To address this limitation, modern systems now leverage multimodal context-aware pipelines that integrate textual, visual, and sensor data to construct a comprehensive, real-time operational picture.
Notably, advanced models like Multimodal Large Language Models (MLLMs) and Vision-Language Models (VLMs) have demonstrated strong capabilities in aligning visual and textual information for zero-shot disaster adaptation. For example, the CUE-M framework refines image context, processes user intent, and filters content through robust classifiers, outperforming baseline MLLMs in both semantic search and safety assurance. Additionally, systems like CSDNet enhance aerial image segmentation, improving the identification of disaster features even in small, underrepresented regions. Overall, this multimodal processing pipeline significantly reduces blind spots, enhances accuracy, and enables more informed decision-making in volatile environments.
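The core benefit of multimodal integration can be distilled into a few lines: combine whichever channels are available so that losing one degrades, rather than breaks, the situational estimate. The late-fusion sketch below (a simple average over available confidences) is a stand-in for the learned fusion these models actually perform; the function and its inputs are invented for illustration.

```python
def fuse_scores(text_conf=None, image_conf=None, sensor_conf=None):
    """Late fusion over whichever modalities are present: average the
    available confidences so that a missing channel (a fog-obscured
    camera, a dropped call) degrades the estimate instead of breaking it."""
    scores = [s for s in (text_conf, image_conf, sensor_conf) if s is not None]
    if not scores:
        raise ValueError("at least one modality is required")
    return sum(scores) / len(scores)


# Full picture vs. camera knocked out by weather:
print(fuse_scores(text_conf=0.9, image_conf=0.7, sensor_conf=0.8))  # ~0.80
print(fuse_scores(text_conf=0.9, sensor_conf=0.8))                  # ~0.85
```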
The evolution of decision support systems (DSS) through AI and machine learning represents a paradigm shift from reactive to proactive disaster management. While traditional systems were often static, modern DSS now integrate geospatial analytics, agent-based modeling, and deep learning to enable data-driven predictions. In particular, Reinforcement Learning (RL) enables adaptive, real-time strategy refinement, and hybrid models that combine RL with Bayesian inference offer greater robustness under uncertainty. Decision-making processes can also benefit from multi-objective optimization, which allows stakeholders to balance trade-offs such as cost, equity, and response speed. However, the deployment of AI-driven DSS is not without challenges, including data bias, lack of model transparency, and ethical concerns. Consequently, incorporating Explainable AI (XAI) is increasingly recognized as essential to foster trust, ensure fairness, and support responsible decision-making, particularly for vulnerable communities that are disproportionately affected by disasters.
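As a worked illustration of the multi-objective trade-off mentioned above, the sketch below applies weighted-sum scalarization, the simplest possible scheme; production DSS typically use Pareto-based optimizers, and the weights and scores here are invented examples.

```python
def plan_score(cost: float, equity: float, speed: float,
               weights: tuple[float, float, float] = (0.3, 0.3, 0.4)) -> float:
    """Weighted-sum scalarization of three normalized objectives
    (each in [0, 1], higher is better); stakeholders encode their
    cost/equity/speed trade-off by tuning the weights."""
    w_cost, w_equity, w_speed = weights
    return w_cost * cost + w_equity * equity + w_speed * speed


# Compare two candidate deployment plans under the same weights.
print(plan_score(cost=0.8, equity=0.5, speed=0.9))  # cheap and fast: 0.75
print(plan_score(cost=0.4, equity=0.9, speed=0.7))  # fairer, costlier: 0.67
```

Making the weights explicit is also a small step toward the transparency that XAI advocates: the reason one plan outranks another is auditable rather than buried in a model.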
Autonomous task orchestration facilitates real-time action planning and execution by multi-agent and multi-robot systems. This is especially crucial in fast-evolving situations such as urban disasters or firefighting, where human coordination becomes infeasible at scale. To support this capability, systems like GenSwarm leverage Large Language Models (LLMs) to generate and deploy robot task policies from natural language inputs, achieving zero-shot adaptability and white-box interpretability. At a more abstract level, the ARC framework integrates LLMs for high-level strategic planning and reinforcement learners for low-level task execution. This creates a hierarchical architecture that enables continual learning, dynamic reconfiguration, and long-term adaptability. Additionally, related frameworks such as SMART-LLM support multi-stage task planning by decomposing complex objectives into sequential, actionable subtasks. Moreover, modular approaches like CoELA enhance system flexibility by embedding LLMs into agent components such as memory, communication, and planning.
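The decomposition pattern that SMART-LLM-style planners rely on can be sketched as follows. Here call_llm is a hypothetical placeholder that returns a canned response so the example runs offline; the prompt and JSON schema are assumptions for illustration, not any framework's actual interface.

```python
import json


def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for an LLM API call; a canned response
    # keeps the sketch runnable offline.
    return json.dumps([
        {"step": 1, "task": "survey flooded zone with drone"},
        {"step": 2, "task": "mark stranded households on shared map"},
        {"step": 3, "task": "dispatch boats to marked locations"},
    ])


def decompose(objective: str) -> list[dict]:
    """Ask the model to break a high-level objective into ordered,
    machine-readable subtasks (the decomposition pattern in general,
    not SMART-LLM's actual prompt or schema)."""
    prompt = ("Decompose the following disaster-response objective into a "
              f"JSON list of sequential subtasks: {objective}")
    return json.loads(call_llm(prompt))


for subtask in decompose("Evacuate the riverside district before the dam overflows"):
    print(subtask["step"], subtask["task"])
```

Parsing the plan as structured JSON rather than free text is what lets downstream agents or reinforcement learners execute each subtask mechanically.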
However, such systems are not without risks: LLM hallucinations and cross-domain message vulnerabilities raise concerns about reliability and security. Therefore, rigorous validation methods and secure, interpretable policy mechanisms are imperative for safe deployment in high-stakes disaster scenarios.
Effective disaster response depends heavily on smart resource matching: getting the right aid to the right place at the right time. However, conventional methods often fall short in this regard, leading to preventable fatalities. To address these challenges, researchers have developed multi-objective evolutionary algorithms for optimizing emergency resource allocation, considering factors like temporary unit placement, ambulance routing, and cost efficiency. Notably, systems like MARL-HC have drastically reduced decision-making times from three minutes to just 0.22 seconds and improved ambulance response times by 5–13 seconds, marginal yet critical gains when lives are at stake.
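To illustrate the optimization core that such allocators build on, the sketch below solves a single-objective ambulance-to-incident assignment with SciPy's Hungarian-algorithm solver (scipy.optimize.linear_sum_assignment). The travel-time matrix is invented, and the cited systems generalize this one-to-one matching to multiple objectives via evolutionary search or multi-agent RL.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Invented travel-time matrix (minutes): rows = ambulances, cols = incidents.
travel_min = np.array([
    [12,  4, 30],
    [ 7, 18,  9],
    [22, 11,  6],
])

# The Hungarian algorithm minimizes total response time for a one-to-one
# assignment; multi-objective variants also weigh cost and coverage.
rows, cols = linear_sum_assignment(travel_min)
for amb, inc in zip(rows, cols):
    print(f"ambulance {amb} -> incident {inc} ({travel_min[amb, inc]} min)")
print("total minutes:", travel_min[rows, cols].sum())  # 17
```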
In parallel, social media has emerged as a real-time sensor network, providing situational data that enhances predictive analytics and early resource allocation. Furthermore, Large Language Models (LLMs) strengthen this capability by semantically matching rescue requests with available resources based on contextual understanding. For instance, frameworks like DisasterResponseGPT can generate actionable response plans in seconds, significantly improving agility in resource deployment. Additionally, tools such as Text2Reward convert unstructured environmental inputs into executable reward functions, enabling reinforcement learning systems to adapt and optimize disaster responses in real time.
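The semantic matching step can be approximated with the standard library alone, as in the sketch below; bag-of-words cosine similarity stands in for the LLM embeddings a production system would use, and the requests and resources are invented examples.

```python
import math
from collections import Counter


def tokens(text: str) -> Counter:
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


# Invented requests and resources; a production system would use LLM
# embeddings rather than word counts to capture paraphrase and context.
requests = ["need insulin and clean water in Matara",
            "roof collapsed, two people injured"]
resources = {
    "medical team with trauma kit": tokens("injured trauma medical people"),
    "supply truck (water, food, medicine)": tokens("water food medicine insulin"),
}

# Match each request to the most semantically similar resource.
for req in requests:
    best = max(resources, key=lambda name: cosine(tokens(req), resources[name]))
    print(f"{req!r} -> {best}")
```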