2023

"News waves" by kevin dooley is licensed with CC BY 2.0

Archive of News from 2023

The Italian Ministry of Health has established a Technical Working Group (Tavolo Tecnico) to study the critical issues emerging from the implementation of the Regulation on hospital care and of the Regulation on community care.


So, who takes part?

#18 representatives of the healthcare sector, in various capacities, all of them exclusively #men (see image).


I will try to tag those of them who are here on LinkedIn:

Marco Mattei, Domenico Mantoan, Francesco Enrichens, Anselmo Campagna, Francesco Saverio Mennini, PhD, Fabio De Iaco, Diego Foschi, dario manfellotto, Magi Antonio, Americo Cicchetti, Vito D'Andrea, Silvestro Scotti (apologies for any inaccuracies).


As Simonetta Molinaro writes: "[...] I believe it should be the experts called to the table who ask for it to be rebalanced from a gender-equality perspective. Finding competent women professionals will certainly not be the problem [...]"

A statement endorsed by a considerable number of women experts in healthcare and, first and foremost, by Donne Protagoniste in Sanità | Community, led by Dr. Monica Calamai.


I cannot find any articles on the web denouncing this state of affairs.

Only former minister #Beatrice #Lorenzin has issued a statement:

"Nulla di personale nei confronti dei componenti del Tavolo istituito dal Ministero della Salute per la revisione degli standard di ospedali e territorio, ma possibile che in tutta Italia non ci sia neanche una donna che possa farne parte? DG, presidenti di società scientifiche, accademiche, medici del territorio, esperte della materia. Proprio neanche una?

This long list of exclusively male names clashes with the actual picture of the healthcare world, which is made up of a clear majority of women. All this is humiliating for the very many women professionals of the SSN. I hope the government changes its approach, showing the will and greater sensitivity to appoint women too to decision-making tables. Healthcare is an increasingly female world, and it is astonishing that the tables of 'power' then turn out to be all male!"

This report distils technical and organisational lessons learned from the scientific work of the JRC that can inform the scoping and implementation of common European data spaces as envisioned by the European Strategy for Data. Those lessons stem from the JRC's long-term commitment and relevant research. The audience targeted by this report comprises policy officers and domain experts who are currently working on, or expect to work on, the scoping and implementation of common European data spaces. The report introduces the policy context and positions the JRC's evidence as an instrument for translating policy conceptualisations and theoretical understandings of data spaces into a practical framework for implementation. This is followed by a synthesis of the requirements for common European data spaces derived from relevant policy documents. A set of data-sharing how-to guides is provided, in addition to a description of a dedicated knowledge base that corresponds to the functional and non-functional requirements of data spaces. The investigation of the technical and governance aspects of data spaces shows that there is no single approach that can be applied to their setup. Therefore, a community-based approach through co-creation of data spaces that considers the domain-specific context is the only feasible way forward that would ensure buy-in by a broad spectrum of stakeholders. Finally, scientific evidence should continue to play a central role in the conceptualisation and implementation of European data spaces.

If managers assume a normal or near-normal distribution of Information Technology (IT) project cost overruns, as is common, and cost overruns can be shown to follow a power-law distribution, managers may be unwittingly exposing their organizations to extreme risk by severely underestimating the probability of large cost overruns. In this research, we collect and analyze a large sample comprised of 5,392 IT projects to empirically examine the probability distribution of IT project cost overruns. Further, we propose and examine a mechanism that can explain such a distribution. Our results reveal that IT projects are far riskier in terms of cost than normally assumed by decision makers and scholars. Specifically, we found that IT project cost overruns follow a power-law distribution in which there are a large number of projects with relatively small overruns and a fat tail that includes a smaller number of projects with extreme overruns. A possible generative mechanism for the identified power-law distribution is found in interdependencies among technological components in IT systems. We propose and demonstrate, through computer simulation, that a problem in a single technological component can lead to chain reactions in which other interdependent components are affected, causing substantial overruns. What the power law tells us is that extreme IT project cost overruns will occur and that the prevalence of these will be grossly underestimated if managers assume that overruns follow a normal or near-normal distribution. This underscores the importance of realistically assessing and mitigating the cost risk of new IT projects up front. 
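The chain-reaction mechanism described in this abstract lends itself to a toy simulation. The sketch below is not the authors' model: it assumes a random dependency graph among components and a fixed propagation probability, both invented here for illustration. Near the critical branching point it produces many small cascades and a fat tail of very large ones, which is the qualitative pattern the abstract describes.

```python
import random
import statistics

def simulate_overrun(n_components=100, edge_prob=0.02, spread_prob=0.5, seed=None):
    """Toy cascade: a defect in one component propagates along random
    interdependencies; the share of affected components proxies the overrun."""
    rng = random.Random(seed)
    # Random directed dependency graph: deps[i] lists components that depend on i.
    deps = [[j for j in range(n_components) if j != i and rng.random() < edge_prob]
            for i in range(n_components)]
    start = rng.randrange(n_components)        # the initially failing component
    affected, frontier = {start}, [start]
    while frontier:                            # follow the chain reaction
        current = frontier.pop()
        for nxt in deps[current]:
            if nxt not in affected and rng.random() < spread_prob:
                affected.add(nxt)
                frontier.append(nxt)
    return len(affected) / n_components        # fraction of the system needing rework

overruns = sorted(simulate_overrun(seed=s) for s in range(1000))
print("median cascade size:", round(statistics.median(overruns), 3))
print("99th percentile:    ", round(overruns[int(0.99 * len(overruns))], 3))
```

Most runs touch only a handful of components, but a small fraction cascades through much of the system: the median stays small while the 99th percentile is far larger, the signature of the fat tail that normal-distribution assumptions miss.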

In light of the growing importance of Modelling and Simulation (M&S), better dissemination of M&S practices and tools appears essential. Modelling and simulation are highly valued in scientific discovery because they provide additional insights that are often impractical to obtain through real-world experimental and theoretical analysis alone. A clear understanding of the benefits these technologies bring to the research ecosystem will help scientists create added value for companies that rely on software-based research and development.

Computational M&S is an invaluable asset in the healthcare sector. However, the need for advanced know-how and computational resources limits its adoption mainly to large biotech and pharmaceutical companies. Making M&S available to a broad spectrum of potential users (medical device and pharmaceutical companies, hospitals, healthcare institutions) would require easy, controlled access to M&S resources in a secure environment.

Many scientists point to the need for Good Simulation Practices (GSP): rules for modelling and simulation applied to the pharmaceutical and medical device industries. There is a strong need today for clearly defined rules and accepted regulations, and GSP will soon take on an essential role. In the coming years, scientific research in in silico medicine will have to become less and less speculative and increasingly translational, that is, immediately applicable and useful for clinical and industrial purposes.

Finally, the identification of good simulation practices is particularly important for the competitiveness of the European life sciences ecosystem, to prevent industry in the United States and other parts of the world from gaining a significant competitive advantage by being able to run faster clinical trials.

Check out the slides from "Scaling Radiology to Serve a Billion People" by Prodigi's @Bargava

Large language models (LLMs) represent a major advance in artificial intelligence (AI) research. However, the widespread use of LLMs is also coupled with significant ethical and social challenges. Previous research has pointed towards auditing as a promising governance mechanism to help ensure that AI systems are designed and deployed in ways that are ethical, legal, and technically robust. However, existing auditing procedures fail to address the governance challenges posed by LLMs, which display emergent capabilities and are adaptable to a wide range of downstream tasks. In this article, we address that gap by outlining a novel blueprint for how to audit LLMs. Specifically, we propose a three-layered approach, whereby governance audits (of technology providers that design and disseminate LLMs), model audits (of LLMs after pre-training but prior to their release), and application audits (of applications based on LLMs) complement and inform each other. We show how audits, when conducted in a structured and coordinated manner on all three levels, can be a feasible and effective mechanism for identifying and managing some of the ethical and social risks posed by LLMs. However, it is important to remain realistic about what auditing can reasonably be expected to achieve. Therefore, we discuss the limitations not only of our three-layered approach but also of the prospect of auditing LLMs at all. Ultimately, this article seeks to expand the methodological toolkit available to technology providers and policymakers who wish to analyse and evaluate LLMs from technical, ethical, and legal perspectives.

Some EU countries (e.g. Finland) already have messaging-app-based triage and "first visits" to better meet their population's demand for healthcare. By relying on messaging, one doctor can respond to 50-ish patient requests per hour, instead of 4-5/h. The growing competence of LLMs is, through this lens, clearly appealing. The pivot point will be made by those who study and deploy an appropriate hand-over procedure that leaves the exchange in the hands of a human who wasn't yet involved whenever the machine is outside its ideal conditions of confidence... and when security at scale is built into these chatty parrots we call AI, of course ;)
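As a thought experiment, such a hand-over rule could look roughly like the sketch below; the field names, the threshold value and the routing labels are hypothetical placeholders, not a description of any deployed triage system.

```python
from dataclasses import dataclass

@dataclass
class TriageReply:
    text: str          # draft answer produced by the language model
    confidence: float  # calibrated confidence score in [0, 1] (hypothetical)
    red_flags: bool    # e.g. wording that suggests a possible emergency

CONFIDENCE_THRESHOLD = 0.85  # illustrative value, not a clinical recommendation

def route_reply(reply: TriageReply) -> str:
    """Decide whether a model draft may go to routine review or must be handed
    over, with the full conversation history, to a clinician not yet involved."""
    if reply.red_flags or reply.confidence < CONFIDENCE_THRESHOLD:
        return "escalate_to_human"
    return "send_after_human_review"

print(route_reply(TriageReply("Rest, fluids, recheck in 48h.", 0.91, red_flags=False)))
print(route_reply(TriageReply("Possible chest pain.", 0.60, red_flags=True)))
```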

Machine learning can easily produce false positives when the test set is used wrongly. Just et al. in Nature Human Behaviour suggested that ML can identify suicidal ideation extremely well from fMRI, and we were skeptical. Today the retraction, and our analysis of what went wrong, came out.

So what went wrong? The authors apparently used the test data to select features. An obvious mistake. A reminder for everyone working with ML: never use the test set for *anything* but testing. The only practical way to do so in medicine? Lock away the test set until the algorithm is registered.

https://www.nature.com/articles/s41562-023-01560-6
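To make the mistake concrete, here is a minimal sketch using synthetic noise data and scikit-learn (my choice of tooling, unrelated to the retracted paper's code): selecting features on the full dataset before cross-validation yields apparently good accuracy on pure noise, while doing the selection inside a pipeline fitted only on the training folds does not.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5000))   # 100 subjects, 5000 noise features ("voxels")
y = rng.integers(0, 2, size=100)   # random binary labels: there is no real signal

# WRONG: feature selection sees all the data, test folds included (leakage).
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=5)

# RIGHT: selection is refit inside each training fold only.
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression(max_iter=1000))
honest = cross_val_score(pipe, X, y, cv=5)

print("leaky CV accuracy :", leaky.mean())   # well above chance on pure noise
print("honest CV accuracy:", honest.mean())  # hovers around 0.5, as it should
```

The same logic applies to any preprocessing step that looks at labels or at held-out data, including the confound-removal case linked below.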

Side note: it took 3 years to go through the process of demonstrating that the paper went wrong. Journals need procedures to accelerate this.

BTW, pay attention: Confound Removal in Machine Learning Leads to Leakage https://arxiv.org/abs/2210.09232 

The decree for the French fast track, PECAN (prise en charge anticipée des dispositifs médicaux numériques), has been published.

The PECAN pathway will accelerate the reimbursement of innovative digital medical devices (with a therapeutic or tele-monitoring function), allowing faster access for patients. DMD manufacturers can start the PECAN registration process by obtaining the following assessments in parallel:

- Evaluation by the CNEDiMTS (HAS) 

- tool: platform EVATECH - ~60 days 

The HAS also offers free private and confidential meetings to discuss the clinical trial protocol, and any questions you might have on the evaluation process.

 - Certification of interoperability and security delivered by the ANS (Digital Health Agency) 

- tool: platform Convergence - ~60 days 

The certification requirements for interoperability and security are written in compliance with the ISO 10781 standard. 

This standard defines, in the form of requirements, the minimum level of guarantees expected, on the basis of which the ANS issues a certificate of conformity for DMDs.

Citizens CONDITIONALLY support the secondary use of health data as long as it matches their ethical values! 

Citizen Demands: 

I. The Data Relationship 

1. Being able to access information about the secondary use of health data, in an understandable way, allowing them to be more engaged 

2. Having access to their data and knowing how they are used for secondary purposes; however, they want to choose how and when they are informed about these uses 

3. That their values should inform what is beneficial to individuals and what constitutes the common good 

4. That decision-making processes rely on a plurality of views and actors to increase their trustworthiness, as for them the latter depends on who is involved in these instances 

5. Being given the opportunity to be involved in the lifecycle of health data, as they need to be engaged on a continuous basis. Otherwise, their relationship with data custodians and users can deteriorate 

II. The Power Balance 

6. Being provided with the opportunity for meaningful and active decision-making in the secondary use of health data, as they value the ability to exercise control 

7. To ensure the protection of individuals' identity, which they perceive as one of the best ways to balance the harms and benefits of the secondary use of health data 

8. That data users' intentions should be transparent and in line with purposes citizens support, as they think some users might share their values more than others 

9. That accountability could be enhanced through transparent and stronger mechanisms 

10. To foster good IT solutions to protect their data, beyond having a strong legal framework in place 

III. A Citizen-powered Framework 

11. That stakeholders respect principles that align with citizens' ethical values 

12. Having a framework which facilitates the secondary use of health data for purposes and benefits that they support, while minimising the potential risks they identify. 

AgID and AGENAS have signed a framework agreement for the implementation of innovative procurement in the healthcare sector.

 

AGENAS is the national public body that carries out research and support activities for the Minister of Health, the Regions and the Autonomous Provinces of Trento and Bolzano. Under Decree-Law no. 4 of 27 January 2022, it also takes on the role of National Agency for Digital Health (ASD), with the objective of steering and ensuring the strengthening of the digitalisation of healthcare services and processes.

-originally shared on LinkedIN by Suchi Saria-
Seeing ChatGPT and healthcare in the same sentence these days often scares me. Very often, the proclamation is about how ChatGPT will solve healthcare, starting with a small demo example and drawing far-reaching conclusions from there. ChatGPT is a word predictor (given past words/context). It generates plausible ideas in a way that mimics human speech, so it sounds intelligent but is not designed to be factually grounded or correct. This article (on Cerner's blog, no less) discusses using ChatGPT to create clinical vignettes (reasonable if verified by an expert) and potentially as a diagnosis tool (a bad plan, because of how ChatGPT works!): https://lnkd.in/eNMwEBDK

This is not to say that we haven't made ground-breaking progress in AI. We have. And we've even built amazing AI tech to serve as a clinical co-pilot / augmentation tool for clinicians to do all sorts of useful things, including assisting with diagnoses: see https://lnkd.in/e_Wkb9cS by Bayesian Health. But doing so meant leveraging the AI ingredients to build recipes that prioritize veracity, trustworthiness, transparency, robustness, and the removal of human bias.

More broadly, in a time of AI hype, enthusiastic declarations are being made by many in a rush to leverage PR to their advantage. When the PR is coming from big companies, folks are even more willing to go along. Highly speculative ideas, confidently written without proof, benefit no one: not the people building these systems nor their end users (in James Vincent's words).

Last September, the Italian Data Protection Authority (Garante della Privacy) issued a negative opinion on the draft decree providing for the creation of the new database called Ecosistema Dati Sanitari (EDS), envisaged by the reform of the Electronic Health Record (Fascicolo sanitario elettronico). The Garante also asked for a second draft decree, intended to foster and improve the nationwide implementation of the Electronic Health Record (FSE), to be corrected.

The Garante's rejection, explained in this post, was driven by the lack of a specific regulatory framework for the new FSE and for the EDS, a framework that, six months after the stop, has still not been defined.


Download the Medical Device Innovation Consortium (MDIC)'s new "Landscape Report & Industry Survey on the Use of Computational Modeling & Simulation (CM&S) in Medical Device Development", available here: https://mdic.org/resource/cmslandscapereport/

Despite a number of initiatives undertaken by the EU in the last few years towards advancing the development and uptake of AI technologies to help EU citizens better monitor their health, receive better diagnoses and more personalized treatments, and live healthier and more independent lives, the current situation in the EU indicates that healthcare organisations are slow in implementing AI technologies and that the level of adoption is low overall. To achieve its long-term objective of the effective implementation of AI in the healthcare sector, the Commission plans to work on a common legislative and policy framework to yield the benefits that AI can bring. Based on the evidence gathered in this study, while most EU MS that have developed AI strategies identify healthcare as a priority sector, there are no policies within those strategies targeting healthcare in particular. At the same time, EU MS have made progress in proposing regulatory frameworks around the management of health data, which is a foundational element for the further development of AI technologies in the healthcare sector. In terms of adoption, while healthcare organisations in the EU are open to adopting AI applications, adoption is at present still limited to specific departments, teams and application areas. The lack of trust in AI-driven decision support is hindering wider adoption, while issues around integrating new technologies into current practice are also prominent challenges identified by relevant stakeholders in EU MS. The scientific output of EU MS in the area of AI in healthcare is largely attributed to the larger EU MS, which are also the most active in collaborating with each other and with smaller MS. Additionally, a need is identified for further financial support for the development of AI technologies that are translated into clinical practice, including support targeting the acquisition of Intellectual Property (IP) rights for the developed technologies. To promote the development and adoption of AI technologies in the healthcare sector, the Commission may address challenges related to policy supporting the further development and adoption of AI in healthcare, increase investment, enable the access, use and exchange of healthcare data, and develop initiatives to upskill healthcare professionals and to educate AI developers on current clinical practices. Addressing cultural issues around trust in the use of AI in the healthcare sector, and creating or updating policy supporting the translation of research into clinical practice, were also important insights extracted from this study.

Would you like to check older news? Visit the archive!