Archive of News from 2023
Congratulations to our friends at Cardioonline!!!
The cardiac monitoring of Virgin Galactic's astronauts is done by their experts ^_^
Please note the use of the word SEARCH in these declarations by Google... we are witnessing evasive medical-entrepreneurship strategy at its best. At what point does this activity become illegitimate medical advice? And does the company get to decide that?
Check out the slides from "Scaling Radiology to Serve a Billion People" by Prodigi's @Bargava
The global governance of AI is getting more salient and more confusing. The authors look at ethical codes, industry governance, contracts/licensing, standards, international agreements and some domestic law. They then highlight that this global governance doesn't just respond to transboundary challenges: it also acts as a form of lobbying, both by forestalling regulation and by signalling, and as a way of 'talking AI into being', by making it seem inevitable, by replacing roles of states, by reshaping the role of organisations, and by marginalising global voices.
...and if you are interested, you should read this one about the AI supply chain: https://arxiv.org/abs/2304.14749v2
Some EU countries (e.g. Finland) already offer messaging-app-based triage and "first visits" to better meet the healthcare demand of their populations. By relying on messaging, one doctor can respond to roughly 50 patient requests per hour, instead of 4-5/h. The growing competence of LLMs is, through this lens, clearly appealing. The pivot point will be made by those who study and deploy an appropriate hand-over procedure, leaving the exchange in the hands of a human who wasn't yet involved whenever the machine is beyond its ideal conditions of confidence... and when security at scale is built into these chatty parrots we call AI, of course ;)
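A minimal sketch of what such a hand-over rule could look like. Everything here is hypothetical and illustrative, not from any deployed triage system: the names `TriageReply`, `CONFIDENCE_FLOOR` and `RED_FLAGS` are assumptions, and a real deployment would use calibrated confidence scores and clinically validated escalation criteria.

```python
from dataclasses import dataclass

# Hypothetical thresholds/keywords, purely for illustration.
CONFIDENCE_FLOOR = 0.85
RED_FLAGS = {"chest pain", "shortness of breath", "suicidal"}

@dataclass
class TriageReply:
    text: str
    confidence: float  # the model's (ideally calibrated) confidence score

def route(message: str, reply: TriageReply) -> str:
    # Escalate to a clinician who has not yet seen the exchange whenever
    # the machine is outside its ideal conditions of confidence, or when
    # the message contains a red-flag symptom.
    if reply.confidence < CONFIDENCE_FLOOR:
        return "handover:low_confidence"
    if any(flag in message.lower() for flag in RED_FLAGS):
        return "handover:red_flag"
    return "reply:automated"

print(route("I have mild chest pain", TriageReply("...", 0.95)))
# prints "handover:red_flag"
```

The key design point is that escalation routes the full exchange to a fresh human reviewer, so the automated part never silently handles cases it shouldn't.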
Machine learning can easily produce false positives when the test set is used wrongly. Just et al. in Nature Human Behaviour suggested that ML can identify suicidal ideation extremely well from fMRI, and we were skeptical. Today the retraction and our analysis of what went wrong came out.
So what went wrong? The authors apparently used the test data to select features. An obvious mistake. A reminder for everyone in ML: never use the test set for *anything* but testing. The only practical way to do so in medicine? Lock away the test set until the algorithm is registered.
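A minimal synthetic sketch of why this matters (numpy only, nothing from the actual study): when features are selected on rows that include the test set, pure noise starts to look predictive. The nearest-centroid classifier and the data sizes here are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5000))   # 100 samples, 5000 pure-noise features
y = rng.integers(0, 2, size=100)   # random labels: there is nothing to learn
train, test = np.arange(50), np.arange(50, 100)

def top_features(rows, k=10):
    # Rank features by |covariance with y|, computed on the given rows only.
    yc = y[rows] - y[rows].mean()
    return np.argsort(np.abs(X[rows].T @ yc))[-k:]

def score(feats):
    # Nearest-centroid classifier: fit on the train rows, score on the test rows.
    Xf = X[:, feats]
    c0 = Xf[train][y[train] == 0].mean(axis=0)
    c1 = Xf[train][y[train] == 1].mean(axis=0)
    pred = np.linalg.norm(Xf[test] - c1, axis=1) < np.linalg.norm(Xf[test] - c0, axis=1)
    return (pred.astype(int) == y[test]).mean()

leaky = score(top_features(np.arange(100)))  # WRONG: test rows used for selection
clean = score(top_features(train))           # right: train rows only

print(f"leaky feature selection: {leaky:.2f}")  # typically far above chance on pure noise
print(f"clean feature selection: {clean:.2f}")  # hovers around 0.50, as it should
```

The "leaky" pipeline reports impressive accuracy on data that is random by construction, which is exactly the failure mode behind the retraction.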
https://www.nature.com/articles/s41562-023-01560-6
Side note: it took 3 years to go through the process of demonstrating that the paper went wrong. Journals need procedures to accelerate this.
BTW, pay attention: Confound Removal in Machine Learning Leads to Leakage https://arxiv.org/abs/2210.09232
The PECAN pathway will accelerate the reimbursement of innovative digital medical devices (with a therapeutic or tele-monitoring function), allowing faster access for patients. DMD manufacturers can start the PECAN registration process by obtaining the following assessments in parallel:
- Evaluation by the CNEDiMTS (HAS)
  - tool: EVATECH platform - ~60 days
  - The HAS also offers free private and confidential meetings to discuss the clinical trial protocol, and any questions you might have on the evaluation process.
- Certification of interoperability and security delivered by the ANS (Digital Health Agency)
  - tool: Convergence platform - ~60 days
A computer, ChatGPT, has now successfully passed the United States Medical Licensing Examination (USMLE) without any training. However, put real-world symptoms in its prompt and it gives a very canned response with extensive hedging and qualifiers, as if reading directly from WebMD! How should physicians think about these tools, and what role does AI have in the evolving digital health space? - A classic example of "just because we can, it doesn't mean we should"
AgID and AGENAS have signed a framework agreement for the implementation of innovative procurement in the healthcare sector.
AGENAS is the national public body that carries out research and support activities for the Minister of Health, the Regions and the Autonomous Provinces of Trento and Bolzano. With decree-law no. 4 of 27 January 2022, it also takes on the role of National Agency for Digital Health (ASD), with the goal of guiding and ensuring the strengthening of the digitalisation of healthcare services and processes.
-originally shared on LinkedIN by Suchi Saria-
Seeing ChatGPT and healthcare in the same sentence these days often scares me. Very often, the proclamation is about how ChatGPT will solve healthcare, starting with a small demo example and drawing far-reaching conclusions from there. ChatGPT is a word predictor (given past words/context). It generates plausible ideas in a way that mimics human speech; it sounds intelligent but is not designed to be factually grounded/correct.

This article (on Cerner's blog, no less) discusses using ChatGPT to create clinical vignettes (reasonable if verified by an expert) and potentially as a diagnosis tool (a bad plan, because of how ChatGPT works!): https://lnkd.in/eNMwEBDK

This is not to say that we haven't made ground-breaking progress in AI. We have. And we've even built amazing AI tech to serve as a clinical co-pilot / augmentation tool for clinicians to do all sorts of useful things, including assisting with diagnoses: see https://lnkd.in/e_Wkb9cS by Bayesian Health. But doing so meant leveraging the AI ingredients to build recipes that prioritize veracity, trustworthiness, transparency, robustness, and the removal of human bias.

More broadly, in a time of AI hype, enthusiastic declarations are being made by many in a rush to leverage PR to their advantage. When the PR is coming from big companies, folks are even more willing to go along. Highly speculative ideas confidently written without proof benefit no one: not the people building these systems, nor their end users (in James Vincent's words).
Last September, the Italian Data Protection Authority (Garante della Privacy) issued a negative opinion on the draft decree establishing the new database known as the Health Data Ecosystem (Ecosistema Dati Sanitari, EDS), foreseen by the reform of the Electronic Health Record. The Garante also asked for corrections to a second draft decree, meant to foster and improve the nationwide implementation of the Electronic Health Record (Fascicolo Sanitario Elettronico, FSE).
The Garante's rejection, explained in this post, was motivated by the lack of a specific regulatory framework for the new FSE and for the EDS. Six months after the stop, that regulatory framework has still not been defined.
Revised Zero Draft [framework] Convention On Artificial Intelligence, Human Rights, Democracy And The Rule Of Law - The Council of Europe is working on a Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. Drafting is ongoing on what will be the first international convention on AI. The committee responsible for drafting the Convention has produced a “zero” draft which is now being negotiated.
Download the Medical Device Innovation Consortium (MDIC)'s new "Landscape Report & Industry Survey on the Use of Computational Modeling & Simulation (CM&S) in Medical Device Development", available here: https://mdic.org/resource/cmslandscapereport/
On March 10-11, 2023 the Italian Society of Telemedicine will host its International Bologna Consensus Assembly: don't miss out!!!
The scientific program, as of January 9, 2023, is available here