Using our original method, Feature Concepts with Data Leaves for synergetic data integration, we found tendencies concerning the positive and negative effects of inter-community interactions: people tend to prefer living in a society where people from multiple communities meet, but infection tends to spread rapidly in such regions. This talk presents findings useful for creating a secure and innovative society to live and work in, where knowledge evolves and emerges through synergetic dynamics. In addition, I propose evolutionary computing as a paradigm for foreseeing or creating knowledge that goes beyond what can be learned from data.
Department of Computer Science and Engineering, University of Bologna (ITALY)
In this lecture, we explore how Artificial Intelligence (AI) can help tackle sustainability issues. We start by looking at how AI learns from data in the different steps of an AI algorithm's life cycle; then we analyze how AI can use data to help us make better choices for the planet and society (e.g., AI can optimize energy consumption in buildings or manage supply chains to reduce waste). By identifying patterns and trends, AI can suggest more sustainable practices and predict outcomes to prevent environmental damage. During the lecture, we will discuss the risks and challenges of using AI and data for sustainability, including environmental issues such as the carbon footprint of AI computations. Next, we will analyze real examples of how AI is already making a difference in areas like resource management and climate change mitigation. Finally, we will work through a practical use case in which participants can better understand how AI can be used for a greener world; the aim is to increase awareness and encourage the development of more energy-efficient AI models. We will conclude with some key points to remember when using AI for a more sustainable future. These key points will summarize best practices, highlight the potential of AI to drive positive environmental change, and emphasize the need for ongoing research and collaboration in this field.
Large events in mega-cities require appropriate crowd flow control for safety and comfort. This is made possible by sensing crowd conditions, predicting risks, and controlling crowds, for all of which AI is extremely effective. We have developed such a crowd management system and are operating it at various events. In the talk we will show some examples of this system and discuss problems related to overtourism.
The lecture will provide an introduction to automated planning, addressing how planning problems (models) are typically represented and solved. After briefly outlining the general context, the focus will shift to non-classical models, i.e., those not commonly covered in traditional artificial intelligence textbooks. Specifically, attention will be directed towards problems that involve metric information such as time and resources. The lecture will then move on to solving planning problems through the lens of forward state-based search. We will highlight one of the main challenges of this approach: deriving efficient and informed relaxation-based heuristics. As demonstrated in the 2023 edition of the International Planning Competition, this is the state-of-the-art approach to solving planning problems. Finally, the lecture will outline examples of how planning can be used to address a variety of realistic problems, ranging from urban traffic congestion and personalized medication administration to the verification of cyber systems and applications in robotics.
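To fix ideas, here is a minimal sketch of forward state-based search over a STRIPS-style model. Everything in it is illustrative rather than taken from any planner: the two toy actions, the facts, and the goal-count heuristic, which stands in as a trivial proxy for the relaxation-based heuristics the lecture discusses.

```python
from heapq import heappush, heappop
from itertools import count

# Illustrative STRIPS-style actions: name -> (preconditions, add list, delete list).
ACTIONS = {
    "pick_up": (frozenset({"hand_empty", "on_table"}),
                frozenset({"holding"}),
                frozenset({"hand_empty", "on_table"})),
    "stack":   (frozenset({"holding"}),
                frozenset({"on_block", "hand_empty"}),
                frozenset({"holding"})),
}

def goal_count(state, goal):
    # Trivial heuristic: how many goal facts are still missing.
    return len(goal - state)

def greedy_forward_search(init, goal):
    # Greedy best-first search over the states reachable from init.
    tie = count()  # tiebreaker so the heap never compares states directly
    frontier = [(goal_count(init, goal), next(tie), init, [])]
    seen = {init}
    while frontier:
        _, _, state, plan = heappop(frontier)
        if goal <= state:
            return plan
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:                    # action is applicable
                succ = (state - delete) | add   # apply its effects
                if succ not in seen:
                    seen.add(succ)
                    heappush(frontier,
                             (goal_count(succ, goal), next(tie), succ, plan + [name]))
    return None

print(greedy_forward_search(frozenset({"hand_empty", "on_table"}),
                            frozenset({"on_block"})))   # ['pick_up', 'stack']
```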
The course introduces computational argumentation, an interdisciplinary research field whose main goals are the study of formal models of argumentative processes and the development of software applications based on these models. Computational argumentation aims to support knowledge representation and automated reasoning in a variety of contexts characterized by the presence of uncertain, incomplete, and typically conflicting information. These range from legal controversies to medical reasoning and from e-democracy to scientific debates, just to give some examples. Argumentation models the reasoning process in the context of disagreement as a production and exchange of arguments, where arguments are attempts to persuade someone of something by giving reasons for accepting a particular conclusion as evident. More specifically, it is possible to distinguish (at least) two levels in argumentation systems. The lower level deals with how an argument is structured on the basis of its components in terms of the underlying knowledge, and how arguments are related on the basis of relations holding between their components at the language level, such as attack and support. The upper level deals with the evaluation of the status of arguments, which in turn makes it possible to evaluate the acceptability of their conclusions. At this level, an abstract model is usually adopted which abstracts away the structure of arguments and the nature of their relations and, as such, can be applied to different argument structures at the lower level. The course covers both structured and abstract argumentation, with a special emphasis on Dung argumentation frameworks and their semantics: a simple, yet powerful, formalism to assess the acceptability of arguments based on the attacks between them.
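To make the abstract level concrete, here is a minimal, self-contained sketch (illustrative names, not code from any argumentation library) of a Dung framework and one of its standard semantics, the grounded extension, computed as the least fixed point of the characteristic function F(S) = {a | every attacker of a is attacked by some member of S}.

```python
def grounded_extension(arguments, attacks):
    # attacks is a set of pairs (x, y) meaning "x attacks y".
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(a, s):
        # a is acceptable w.r.t. s if s attacks every attacker of a.
        return all(any((b, x) in attacks for b in s) for x in attackers[a])

    s = set()
    while True:
        # Iterate the (monotone) characteristic function from the empty set.
        nxt = {a for a in arguments if defended(a, s)}
        if nxt == s:
            return s
        s = nxt

# Example: a attacks b, b attacks c. a is unattacked and defends c,
# so the grounded extension is {a, c} and b is rejected.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))   # ['a', 'c']
```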
A unifying concept that ties together many diverse aspects of today's Artificial Intelligence is the notion of intelligent software agent, namely a software component capable of reasoning, acting, interacting, and learning. This tutorial introduces the notions of agents and multiagent systems (MAS) and overviews some of the key challenges and solutions proposed in the agent-oriented software engineering field, from programming multi-agent systems, to agent-based modeling and simulation, and applications of agents and MAS.
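As a minimal illustration of the agent abstraction named above, the toy sketch below shows a reactive agent that repeatedly perceives its environment, updates its beliefs, and selects an action. All names and the environment dynamics are hypothetical and not drawn from any specific agent platform.

```python
class ThermostatAgent:
    """A trivial reactive agent regulating a toy room temperature."""

    def __init__(self, target):
        self.target = target
        self.beliefs = {}          # the agent's internal state

    def perceive(self, env):
        self.beliefs["temp"] = env["temp"]

    def act(self):
        return "heat" if self.beliefs["temp"] < self.target else "idle"

env = {"temp": 17.0}
agent = ThermostatAgent(target=20.0)
for _ in range(5):
    agent.perceive(env)            # sense
    action = agent.act()           # decide and act
    env["temp"] += 1.0 if action == "heat" else -0.2   # toy environment dynamics
    print(f"temp={env['temp']:.1f} action={action}")
```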
In this talk, recent advances in AI-based techniques for improving diagnostic accuracy, treatment personalization, and the safety of surgical interventions will be presented. Example applications will be shown, together with future possibilities offered by a widespread adoption of artificial intelligence in medicine.
Causal inference is an essential tool that allows us to estimate the impact of a decision on an outcome of interest. While in theory it is possible to recover the true causal effect from data alone, the required assumptions make this virtually impossible in real-world applications. An alternative solution is to leverage both data and prior knowledge provided by experts to learn a causal graph representing the underlying data-generating model. This process is called causal discovery and enables scholars to explain the interplay between the observed variables. In this talk, we will take a deep dive into the causal discovery problem with an applied case study in endometrial cancer.
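A small self-contained sketch may help show the statistical footprint that constraint-based causal discovery algorithms (such as PC) exploit: in a chain X → Y → Z, X and Z are correlated, but become independent once we condition on Y. The data-generating coefficients below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)      # Y is caused by X
z = -1.5 * y + rng.normal(size=n)     # Z is caused by Y

def partial_corr(a, b, c):
    # Correlation of a and b after linearly regressing out c from both.
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

print(f"corr(X, Z)     = {np.corrcoef(x, z)[0, 1]:+.3f}")  # strongly negative
print(f"corr(X, Z | Y) = {partial_corr(x, z, y):+.3f}")    # close to zero
```

A discovery algorithm runs many such conditional-independence tests and, combined with expert priors, orients the resulting skeleton into a causal graph.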
Deep learning techniques have gained significant adoption in healthcare, especially in assisting medical imaging diagnosis and computer-assisted surgery. These methods play a crucial role in identifying, segmenting, and classifying pathologies or medically significant features, thereby improving healthcare delivery and facilitating personalized medicine. In this presentation, we will provide a comprehensive overview of common deep learning methodologies used for analyzing both pre- and intra-operative images. Furthermore, we will highlight recent advancements, innovative solutions, and real-world applications in this field.
The advent of Large Language Models (LLMs) has opened new perspectives on their usage within the digital health domain. However, their intrinsically probabilistic and unpredictable behavior requires the design of trustworthy strategies to avoid hallucinations that, especially within the digital health domain, may lead to severe harm. This issue has been addressed with the adoption of Retrieval-Augmented Generation solutions, where the text generation task is supported by controlled knowledge injected into the prompts. Even though this mitigates the hallucination issue, the generation of certified information (such as trustworthy content guaranteed by the system's owner) requires more sophisticated strategies. In this talk, I present an approach where the classic Retrieval-Augmented Generation pipeline is enhanced with an initial step in which the Large Language Model is asked to generate a preliminary text used to query the repository of certified information, so that the appropriate content is presented to the final user.
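The sketch below illustrates the shape of such a two-stage pipeline. Every component is an illustrative stand-in: `llm` is a placeholder for any text generator, and the "certified repository" is a toy keyword index; the structure, not the components, is the point.

```python
CERTIFIED_DOCS = {
    "doc1": "Paracetamol: maximum certified adult dose is 4 g per day.",
    "doc2": "Ibuprofen: certified guidance recommends intake with food.",
}

def llm(prompt: str) -> str:
    # Placeholder: a real system would call a language model here.
    return "patient asks about paracetamol dose"

def retrieve(query: str, k: int = 1):
    # Toy retriever: rank certified documents by keyword overlap with the query.
    words = set(query.lower().split())
    scored = sorted(CERTIFIED_DOCS.values(),
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def answer(user_question: str) -> str:
    # Stage 1: ask the LLM for a preliminary draft of the answer.
    draft = llm(f"Draft a short answer to: {user_question}")
    # Stage 2: query the certified repository with the draft, not the raw question.
    evidence = retrieve(draft)
    # Stage 3: classic RAG step, generating the final answer grounded in evidence.
    return llm(f"Answer using only this certified content: {evidence}\n"
               f"Question: {user_question}")

print(answer("How much paracetamol can an adult take?"))
```

Using the draft rather than the raw question as the retrieval query is what distinguishes this pipeline from classic RAG: the preliminary text is typically closer in vocabulary to the certified documents than the user's phrasing.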
Dynamical systems provide a methodologically sound framework for the design of novel neural architectures, characterized by provable robustness/stability and effective neural information propagation. Looking at neural networks from a dynamical systems perspective also allows us to design models that are more efficient and amenable to implementation outside classical von Neumann computing architectures, e.g., in neuromorphic hardware. In this talk, we will overview this lively research topic and discuss how dynamical systems can become building blocks for the next generation of efficient, sustainable, and robust neural networks (for sequential and graph data).
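As one well-known instance of this viewpoint (an AntisymmetricRNN-style construction; sizes and constants below are illustrative), a recurrent network can be read as a forward-Euler discretization of the ODE dh/dt = tanh(Wh + Ux). Choosing W antisymmetric (W = V − Vᵀ) keeps the Jacobian's eigenvalues near the imaginary axis, which is a standard recipe for stable propagation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, steps, eps = 32, 8, 200, 0.1

V = rng.normal(scale=0.1, size=(d, d))
W = V - V.T - 0.01 * np.eye(d)       # antisymmetric part + small damping
U = rng.normal(scale=0.1, size=(d, k))

h = np.zeros(d)
for t in range(steps):
    x = rng.normal(size=k)                  # dummy input at step t
    h = h + eps * np.tanh(W @ h + U @ x)    # one forward-Euler step

print(f"|h| after {steps} steps: {np.linalg.norm(h):.2f}")  # stays bounded
```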
Zero-shot learning is the ability of a machine learning model to recognize semantic concepts unseen during training. While over the years the field has evolved toward increasingly challenging settings, one principle has remained intact: to recognize unseen classes, we need to ground the (visual) input to a semantic description rather than to the class itself. This semantic description, in the form of language, is the key to generalization. Nowadays, with the advent of large multimodal models (LMMs), the concept of "unseen" is brittle and hard to define. Nevertheless, can this principle still be useful for semantic generalization? In this lecture, we will discuss how this is the case. We will first introduce zero-shot learning and the fundamental approaches to the task. We will then discuss LMMs and their capabilities. Finally, we will see how the principle of grounding visual information to language can be used to re-purpose LMMs to remove assumptions (e.g., in classification) and address tasks beyond the ones they were designed for (e.g., anomaly detection and compositional recognition) without requiring any training. We will conclude with a discussion of the pros and cons of this paradigm as well as promising future directions.
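The grounding principle can be made concrete with a short sketch of CLIP-style zero-shot classification, assuming the Hugging Face transformers implementation of CLIP (the checkpoint name and image URL below are the usual public examples, not anything specific to this lecture): class names never appear as training labels and enter only as text descriptions scored against the image.

```python
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # photo of two cats
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of a cat", "a photo of a dog", "a photo of a truck"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image   # image-text similarity scores
print(dict(zip(texts, logits.softmax(dim=-1)[0].tolist())))
```

Swapping in different text prompts repurposes the same frozen model for new label sets, which is exactly the generalization-through-language idea the lecture builds on.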
Backpropagation is the de facto standard algorithm for training deep neural networks. Due to its effectiveness on a wide range of applications, backpropagation has been implemented in, and can be easily run with, every deep learning library, such as PyTorch and JAX. However, backpropagation is no panacea. After a brief introduction to backpropagation, we will discuss its limitations, especially those that sometimes prevent its usage altogether. We will then work on removing such limitations by introducing alternative training algorithms for deep networks. The hands-on session will challenge the participants to implement a backpropagation-free learning algorithm from scratch in Python and to assess its effectiveness with respect to backpropagation. Will someone be able to beat it?
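As a taste of what the hands-on session asks for, here is a minimal numpy sketch of one backpropagation-free rule, feedback alignment: the backward pass replaces the transposed forward weights with a fixed random matrix B, removing the weight-transport requirement, yet the network still learns. The toy regression task, sizes, and learning rate are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))
y = np.sin(X.sum(axis=1, keepdims=True))       # toy target function

W1 = rng.normal(scale=0.3, size=(10, 32))
W2 = rng.normal(scale=0.3, size=(32, 1))
B = rng.normal(scale=0.3, size=(1, 32))        # fixed random feedback matrix
lr = 0.2

for step in range(2000):
    h = np.tanh(X @ W1)                        # forward pass
    out = h @ W2
    err = out - y                              # dLoss/dout for MSE
    # Backward pass: err @ B instead of err @ W2.T -- no weight transport.
    dh = (err @ B) * (1 - h**2)
    W2 -= lr * h.T @ err / len(X)
    W1 -= lr * X.T @ dh / len(X)

print(f"final MSE: {np.mean((np.tanh(X @ W1) @ W2 - y)**2):.4f}")
```

Replacing `err @ B` with `err @ W2.T` recovers exact backpropagation, which makes this sketch a convenient baseline for the session's comparison.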
AI research on Natural Language Understanding has recently achieved impressive results through the adoption of Generative Pre-trained Large Language Models (LLMs) as core systems for a variety of NLP applications, ranging from question answering and dialogue to intelligent data management, multimedia information processing, and knowledge integration. The Deep Learning paradigms underlying LLMs have been intensively studied and developed within the realms of natural language semantics and structured learning technologies. The emerging abilities of LLMs are becoming a focus of AI research for a number of reasons. First, they seem to bridge a still-persisting gap between language understanding and knowledge, with significant perspectives on automated reasoning in different AI areas and applications. Second, these abilities seem to support core functional aspects of human intelligence such as commonsense reasoning, argumentation, and planning.
Europe, and Italy in particular, is on the brink of a demographic revolution unprecedented in human history: a stable and permanent ageing of the population, a new structural equilibrium fraught with crises and unknown opportunities. Ageing will affect every aspect of our societies: medicine and human biology, the economic system and the labour market, care and the welfare state, the European cultural and political system. The challenge for our communities today is to seize the opportunities in advance by overcoming or skilfully avoiding crises and navigating uncharted waters. All the classical quantitative demographic tools, both descriptive and predictive, based on both sample and population data, may not be up to such a complex challenge. Indeed, populations, like ideas, are not born in a vacuum, but are constantly and reciprocally influenced by complex relationships with the environment in which they live, be it physical, economic, social or political. However, thanks to technological advances, we also have new tools that can amplify human power and intellect to unprecedented levels and capture these complex relationships with greater ease and speed. The data-analysis and prediction models that artificial intelligence offers researchers are among these tools, enabling, for example: geographical matrices of the real distances between population centres and basic services such as schooling, health and transport; the redefinition of the administrative boundaries of public intervention; the optimisation of the planning of service delivery centres; and the creation and validation of indicators for planning and monitoring public policies. This presentation will highlight the limitations of some demographic-administrative tools currently used for planning public interventions with a demographic objective (e.g. Aree Interne SNAI-ISTAT, ICCP-Indici di Criticità Comunale Potenziale) and the potential that AI offers to improve or overcome them.
National Research Council
Institute of Cognitive Sciences and Technologies (Rome, ITALY)
The deployment of Artificial Intelligence (AI) and Robotics in real-world settings presents unique challenges beyond the technical aspects of design and development. To ensure these technologies are effective, it is essential to adopt a user-centered approach throughout the entire process. By prioritizing the needs and experiences of end users, this approach can significantly enhance user acceptance and ensure that the technology aligns with real-world requirements. Achieving this goal demands a multidisciplinary effort, incorporating insights from various fields and engaging all relevant stakeholders from the early stages to final implementation. By delving into past and present experiences and highlighting the valuable lessons learned from previous endeavors, this talk will emphasize the critical role that continuous user involvement plays in shaping innovative technologies, ultimately fostering greater collaboration between humans and intelligent systems. In doing so, the pathway to creating solutions that are not only functional but also embraced by their users becomes clearer, ensuring that AI and robotics are seamlessly integrated into society.
Nowadays, wearable devices and environmental sensors allow us to easily collect data related to subjects' emotions and behaviour. Being unobtrusive and relatively inexpensive, this kind of data collection is particularly well suited to studies involving older people. Advanced human-system interfaces can benefit from these data by taking into account the emotions involved in the interaction. The affective computing cycle underlying these applications will be presented, highlighting its limits and potential. The same kind of data can also be extremely useful in applications that do not imply active subject interaction, such as remote monitoring, especially in the case of older and frail people. The role of data quality, inter- and intra-subject variability, as well as the usability of the adopted technology will be discussed, providing hints for evaluating the reliability of personalised AI models for older people. Case studies will be presented and critically discussed.