Events

Special Session @ WIRN 2024

Explainable AI for Biomedical Images and Signals

Special Session of the 32nd Italian Workshop on Neural Networks (WIRN) 2024

Aim & Scope of the Special Session

AI models have demonstrated remarkable success in the healthcare domain, spanning from medical imaging to signal processing. Nevertheless, as the complexity of these models grows, it becomes increasingly challenging to understand and trust them. To effectively deploy these models in real-world medical scenarios, there is a critical need for transparency and an understanding of the underlying processes and insights learned by these black-box algorithms. This necessity has led to a wave of research aimed at developing methods and techniques that enhance the explainability of deep learning models tailored to the healthcare domain. This session seeks to bring together experts in biomedical image and signal processing and in XAI to showcase the latest advancements at the intersection of these research areas and to demystify AI decision-making processes in healthcare.

Topics

Potential topics include, but are not limited to:


Special Session @ ICASSP 2024

ICASSP 2024 Special Session on "Generative Semantic Communication: How Generative Models Enhance Semantic Communications"

Organizers: Eleonora Grassucci, Yuki Mitsufuji, Ping Zhang

Abstract
Semantic communication is poised to become a fundamental aspect of future AI-driven communications. It holds the potential to regenerate, at the receiving end, images or videos that are semantically equivalent to the transmitted ones, without relying solely on retrieving the exact sequence of bits that were transmitted. However, existing solutions have yet to develop the capacity to construct elaborate scenes from the partial information received. There is therefore a pressing need to strike a balance between the effectiveness of generation methods and the complexity of the transmitted information, while potentially taking the goal of communication into account. To this end, deep generative models (DGMs), such as diffusion and score-based models, have started to show great potential in semantic communication frameworks, revealing the ability to generate semantically consistent content at the receiver side. Indeed, DGMs are extremely powerful at solving image and audio inverse problems and at generating multimedia content even from information that has been heavily degraded by the transmission channel. Thanks to these abilities, DGMs can significantly enhance next-generation semantic communication frameworks.

This special session aims to bring together leading researchers in the fields of generative modeling and wireless communication to present advances in generative learning methods for semantic communication that can empower science and technology for humankind.
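
To make the pipeline in the abstract concrete, here is a minimal, purely illustrative Python/NumPy sketch (not part of the call, with arbitrary parameters): a compact "semantic" representation of the source is transmitted over a noisy channel, and the receiver regenerates a semantically equivalent signal from the degraded code. The low-frequency spectral encoder and the inverse-transform decoder are toy stand-ins for the learned semantic codec and the diffusion or score-based generative model discussed above.

import numpy as np

rng = np.random.default_rng(0)

# Source signal: a smooth waveform standing in for an image or audio frame.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

# "Semantic" encoder (toy): keep only the 16 lowest-frequency coefficients,
# a compact description of the content rather than the raw samples.
K = 16
z = np.fft.rfft(x)[:K]

# Noisy channel: additive white Gaussian noise on the transmitted code.
z_rx = z + 0.05 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))

# "Generative" receiver (toy): regenerate the full signal from the degraded
# compact code; a diffusion/score-based model would play this role in the
# frameworks targeted by the session.
spectrum = np.zeros(len(x) // 2 + 1, dtype=complex)
spectrum[:K] = z_rx
x_hat = np.fft.irfft(spectrum, n=len(x))

# The goal is semantic equivalence, not bit-exact recovery of the transmission.
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))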


Location
ICASSP 2024, Seoul, Korea, 14-19 April 2024


More info

https://sites.google.com/uniroma1.it/icassp2024-special-session/home-page


IEEE Machine Learning for Signal Processing 2023

We are thrilled and honored to announce that the ISPAMM Lab is involved in the organization of the 33rd IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2023), which will be held in Rome, Italy, 17-20 September 2023. On behalf of the whole Organizing Committee, we warmly invite you to participate in IEEE MLSP 2023!

Please visit the website for all the information: https://2023.ieeemlsp.org

Special Session @ IJCNN 2022

Theme and Scope

In the last decade, deep learning has revolutionized the research fields of audio and speech signal processing, and acoustic scene analysis. In these research fields, methods relying on deep learning have achieved remarkable performance in various applications and tasks, surpassing legacy methods that rely on the independent usage of signal processing operations and machine learning algorithms. The huge success of deep learning methods relies on their ability to learn representations from audio and speech signals that are useful for various downstream tasks.

In fact, the typical methodology adopted in these tasks consists of extracting and manipulating useful information from the audio and speech streams to drive the execution of automated services. In addition, obtaining reliable performance on data recorded in real acoustic environments, where several unpredictable and corruptive factors (such as background noise, reverberation, and multiple interferences) always degrade algorithm behavior, is a challenge of fundamental importance.

In light of all the aforementioned aspects, it is of great interest for the scientific community to assess the effectiveness of novel computational algorithms for audio and speech processing that operate under these environmental conditions and are able to enhance the quality of the recorded signals in order to successfully accomplish specific tasks, such as machine listening, automatic diarization, auditory scene analysis, music information retrieval, and many others. Moreover, recent advances in the exploitation of Deep Learning models to directly handle raw acoustic data make use of cross-domain approaches that exploit the information contained in diverse kinds of environmental audio signals.

The aim of this session is therefore to present the most recent advances in Deep Learning for audio and speech enhancement, covering a wide range of processing tasks and applications in real acoustic environments.

 

Topics

Potential topics include, but are not limited to:

 

Important Dates

·       Paper submission: January 31, 2022

·       Decision notifications: April 26, 2022

·       Camera-ready papers: May 23, 2022

 

Submission

Manuscripts intended for the special session should be submitted via the paper submission website of WCCI 2022 as regular submissions. All papers submitted to special sessions will be subject to the same peer-review procedure as regular papers. Accepted papers will be part of the regular conference proceedings.

Paper submission guidelines: https://wcci2022.org/submission/


Special issue "Artificial Intelligence Based Audio Signal Processing" on SENSORS

Guest Editors

Prof. Dr. Michele Scarpiniti, Sapienza University of Rome, Italy

Prof. Dr. Jen-Tzung Chien, National Yang Ming Chiao Tung University, Taiwan

Prof. Dr. Stefano Squartini, Università Politecnica delle Marche, Italy


Web site: https://www.mdpi.com/journal/sensors/special_issues/Intelligence_Audio_Signal_Processing


Aims and topics

Nowadays, artificial intelligence is widely used to tackle complex modelling, prediction, and recognition tasks in different research fields. The application of artificial intelligence methods to audio sensors has attracted great interest in the scientific community over the last decade, with a wide diversification of research topics depending on the nature of the "microphone" sensors under study (i.e., music, speech, sound). The focus is on suitably processing the audio streams, often acquired in the presence of harsh acoustic conditions, to extract the information contained therein and to create and control intelligent services. More recently, end-to-end computational models, which directly handle raw acoustic data, and cross-domain approaches, which jointly exploit the information contained in heterogeneous audio sensors, have been widely adopted for this purpose. The aim of this special issue is therefore to present the most recent advances in the application of novel artificial intelligence algorithms to a wide range of audio sensing and processing tasks in real acoustic environments.

Potential topics include, but are not limited to:

 

Submission information

The final deadline is 1 May 2022, but authors may submit their manuscripts at any time before the deadline. Papers will be published upon acceptance.

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, use the submission form on the special issue webpage.

Special Session @ EUSIPCO 2020

Deep Machine Listening


Special Session at

28th EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO) 2020

24 – 28 August, 2020 – Amsterdam, The Netherlands

http://www.eusipco2020.org/


Theme and Scope

Nowadays, computational algorithms are widely used to tackle complex modelling, prediction, and recognition tasks in different research fields. One of these fields is Machine Listening, i.e., audio understanding by machines, which finds applications in communications, entertainment, security, forensics, psychology, and health, to name but a few. Moreover, Machine Listening can benefit greatly from the recent interest in, and great success of, Deep Learning techniques.

The typical methodology adopted in these tasks consists of extracting and manipulating useful information from the audio stream to drive the execution of automated services. Such an approach is applied to different kinds of audio signals, from music to speech, from sound to acoustic data. In addition, obtaining reliable performance on data recorded in real acoustic environments, where several unpredictable and corruptive factors (such as background noise, reverberation, and multiple interferences) always degrade algorithm behavior, is a challenge of fundamental importance.

In light of all the aforementioned aspects, it is of great interest for the scientific community to assess the effectiveness of novel computational algorithms for audio processing operating under these environmental conditions. Moreover, the exploitation of end-to-end computational models, which directly handle raw acoustic data, and of cross-domain approaches, which exploit the information contained in diverse kinds of environmental audio signals, has recently been investigated. The aim of this session is therefore to present the most recent advances in Deep Machine Listening, covering a wide range of audio processing tasks and applications in real acoustic environments.

 

Topics

Potential topics include, but are not limited to:

 

Important Dates

·       Paper submission: February 21, 2020

·       Decision notifications: May 29, 2020

·       Camera-ready papers: June 12, 2020

Special Session @ EUSIPCO 2019

Machine Audition: recent advances and applications


Special Session at

27th EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO) 2019

2 – 6 September, 2019 – A Coruña, Spain

http://www.eusipco2019.org/


Theme and Scope

Nowadays, computational algorithms are widely used to tackle complex modelling, prediction, and recognition tasks in different research fields. One of these fields is Machine Audition, i.e., audio understanding by machines, which finds applications in communications, entertainment, security, forensics, psychology, and health, to name but a few.

The typical methodology adopted in these tasks consists of extracting and manipulating useful information from the audio stream to drive the execution of automated services. Such an approach is applied to different kinds of audio signals, from music to speech, from sound to acoustic data. In addition, obtaining reliable performance on data recorded in real acoustic environments, where several unpredictable and corruptive factors (such as background noise, reverberation, and multiple interferences) always degrade algorithm behavior, is a challenge of fundamental importance.

In light of all the aforementioned aspects, it is of great interest for the scientific community to assess the effectiveness of novel computational algorithms for audio processing operating under these environmental conditions. Moreover, cross-domain approaches that exploit the information contained in diverse kinds of environmental audio signals have recently been investigated. The aim of this session is therefore to present the most recent advances in Machine Audition, covering a wide range of audio processing tasks and applications in real acoustic environments.

 

Topics

Potential topics include, but are not limited to:

 

Important Dates

·       Tentative Title submission: December 08, 2018

·       Paper submission: February 18, 2019

·       Decision notifications: May 17, 2019

·       Camera-ready papers: May 31, 2019

Special Session @ WIRN 2019

Advanced Smart Multimodal Data Processing

Special Session at

Italian Workshop on Neural Networks (WIRN) 2019

June 12-14, 2019 – Vietri sul Mare, Salerno, Italy

http://www.vitoantoniobevilacqua.it/siren/wirn-2019/


 

Theme and Scope

Multimodal systems offer a flexible, efficient, and usable environment that allows users to interact through multiple input modalities. Multimodal signal processing is an important research and development field within the multimodal systems sector: it processes signals and combines information from a variety of modalities (e.g., speech, language, text), significantly enhancing the understanding, modelling, and performance of human-computer interaction devices and of systems that support human-human communication. Moreover, multimodal human-computer interaction enables freer and more natural communication, interfacing users with automated systems in both input and output. The aim of the special session is to host original papers and reviews on recent research advances and state-of-the-art methods in the fields of Soft Computing, Machine Learning, and Data Mining concerned with the processing of multimodal data, in order to highlight both systems and data processing tools.


Special Session @ EUSIPCO 2018

Computational Audio Processing in Real Acoustic Environments


Special Session at

26th EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO) 2018

3 – 7 September, 2018 – Rome, Italy

http://www.eusipco2018.org/


Theme and Scope

Nowadays, computational algorithms are widely used to tackle complex modelling, prediction, and recognition tasks in different research fields. One of these fields is digital audio processing, which finds applications in communications, entertainment, security, forensics, and health, to name but a few.

The typical methodology adopted in these tasks consists of extracting and manipulating useful information from the audio stream to drive the execution of automated services. Such an approach is applied to different kinds of audio signals, from music to speech, from sound to acoustic data. In addition, obtaining reliable performance on data recorded in real acoustic environments, where several unpredictable and corruptive factors (such as background noise, reverberation, and multiple interferences) always degrade algorithm behavior, is a challenge of fundamental importance.

In light of all the aforementioned aspects, it is of great interest for the scientific community to assess the effectiveness of novel computational algorithms for audio processing operating under these environmental conditions. Moreover, cross-domain approaches that exploit the information contained in diverse kinds of environmental audio signals have recently been investigated. The aim of this session is therefore to focus on the most recent advances and their applicability to a wide range of audio processing tasks in real acoustic environments.

 

Topics

Potential topics include, but are not limited to:

 

Important Dates

·       Paper submission: February 28, 2018 (Extended)

·       Decision notifications: May 18, 2018

·       Camera-ready papers: June 18, 2018

Special Session @ EUSIPCO 2017

Computational Methods for Audio Analysis


Special Session at

25th EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO) 2017

28 August – 2 September, 2017 – Kos Island, Greece

http://www.eusipco2017.org/


Theme and Scope

Nowadays, computational methods are widely used to tackle complex modelling, prediction, and recognition tasks in different research fields. One of these fields is the analysis of audio signals, which finds applications in communications, entertainment, security, forensics, and health, to name but a few.

The typical methodology adopted in these tasks consists of extracting and manipulating useful information from the audio stream to drive the execution of automated services. Such an approach is applied to different kinds of audio signals, from music to speech, from sound to acoustic data. Computational methods may also make it possible to characterize an audio stream by directly analyzing raw data. Moreover, cross-domain approaches that exploit the information contained in diverse kinds of environmental audio signals have recently been investigated.

In light of all the aforementioned aspects, it is of great interest for the scientific community to assess the effectiveness of novel computational methods for audio analysis. The aim of this session is therefore to focus on the most recent advancements and their applicability to a wide range of audio analysis tasks.

 

Topics

Potential topics include, but are not limited to:

 

Important Dates

·       Paper submission: March 05, 2017 (Extended)

·       Decision notifications: May 25, 2017

·       Camera-ready papers: June 17, 2017

Special Session @ MLSP 2016

Computational Methods for Audio Analysis


Special Session at

IEEE International Workshop on MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP) 2016

September 13-16, 2016 – Vietri sul Mare, Salerno, Italy

http://mlsp2016.conwiz.dk/home.htm


Theme and Scope

Nowadays, computational methods are widely used to tackle complex modelling, prediction, and recognition tasks in different research fields. One of these fields is the analysis of audio signals, which finds applications in communications, entertainment, security, forensics, and health, to name but a few.

The typical methodology adopted in these tasks consists of extracting and manipulating useful information from the audio stream to drive the execution of automated services. Such an approach is applied to different kinds of audio signals, from music to speech, from sound to acoustic data. Computational methods may also make it possible to characterize an audio stream by directly analyzing raw data. Moreover, cross-domain approaches that exploit the information contained in diverse kinds of environmental audio signals have recently been investigated.

In light of all the aforementioned aspects, it is of great interest for the scientific community to assess the effectiveness of novel computational methods for audio analysis. The aim of this session is therefore to focus on the most recent advancements in the Machine Learning field and their applicability to a wide range of audio analysis tasks.

 

Topics

Potential topics include, but are not limited to:


Important Dates

·       Paper submission: May 1, 2016

·       Decision notifications: June 5, 2016

·       Camera-ready papers: July 31, 2016

Special Session @ WIRN 2012

Smart Grids: new frontiers and challenges

Special Session at

22nd Italian Workshop on Neural Networks (WIRN) 2012

May 17-19, 2012 – Vietri sul Mare, Salerno, Italy

http://www.associazionesiren.org/initiatives/default.asp

 

Theme and Scope

As the world population increases, the sustainable usage of natural resources becomes an issue that humanity and technology are urgently asked to face. Energy is a relevant example from this perspective, and the strong demand coming from developed and developing countries has pushed scientists worldwide to intensify their studies on renewable energy sources (such as sun, wind, and sea).

At the same time, due to the increasing complexity of the medium-voltage (MV) and low-voltage (LV) distribution grids into which distributed electrical generators based on renewables have to be integrated, growing interest has been directed towards the development of smart systems able to optimally manage the usage and distribution of energy among the population, with the objective of minimizing waste and economic impact, even at the level of household consumption. This has yielded a flourishing scientific literature on sophisticated algorithms and systems aimed at introducing intelligence into the energy grid, with several effective solutions already available on the market.

The task is surely challenging and multi-faceted. Indeed, the different needs of the heterogeneous grid customers and the different peculiarities of the energy sources to be included in the grid itself have to be taken into account. Moreover, several ways of intervention are feasible, such as the ones indicated as a reference in the US Energy Independence and Security Act of 2007: self-healing capability, fault tolerance in resisting attacks, integration of all energy generation and storage options, dynamic optimization of grid operation and resources with full cyber-security, incorporation of demand-response, demand-side resources, and energy-efficient resources, active client participation in grid operations through timely information and control options, and improvement of the reliability, power quality, security, and efficiency of the electricity infrastructure.

A multi-disciplinary, coordinated action is required from the scientific communities operating in the electrical and electronic engineering, computational intelligence, digital signal processing, and telecommunications research fields to provide adequate technological solutions to these issues, bearing in mind the increasingly stringent constraints we have to consider in terms of environmental sustainability. Thus, the organizers of this Special Session want to explore the new frontiers and challenges in Smart Grid research and to offer a productive discussion table for scientists joining the WIRN conference, whose expertise typically covers all the aforementioned fields.

 

Topics

Potential topics include, but are not limited to:

 

Important Dates

·       Paper submission: March 15, 2012

·       Decision notifications: April 20, 2012

·       Camera-ready papers: May 17, 2012