AIs claiming to solve medical problems or offering medical advice without certification and oversight

Richard Dufour @r_dufour Very proud to announce BioMistral, a collection of open-source pre-trained LLMs for the medical domain

Arxiv: http://tinyurl.com/3xk8hua6
BioMistral 7B model: http://tinyurl.com/mubkfprp

More info: http://tinyurl.com/5etvbvkz

@CNRSinformatics @LaboLS2N @taln_ls2n @LabrakYanis
Replying to @r_dufour @CNRSinformatics and 3 others

Spreading these around as “trained LLMs for the medical domain” is like putting actors in medical uniforms on television and letting them advise people. If you are going to “test” them, it has to be done by independent means, and that probably means “certification”, just as for humans. If you let medical schools or government agencies “certify” AIs, the most likely outcome is that AIs will be kept out of traditional human jobs. Or rich donors and lobbyists will buy their certifications and use them for hegemony.
 
Asking for “correct answers” to factoids is not the basis for judging human medical students. Rather, they are tested on progressively more difficult tasks and assigned to real situations, with careful oversight, mentoring, and testing (of the AIs) with real patients. If it is just “medical advice”, there are restrictions on that. If you say your AI is good at “medical answers” and people listen and are hurt, you might be the one liable. Even putting cute little medical shield emojis on it might get you in trouble.
 
I suggest you spend the money and time, and send your potential AI doctor to medical school. Let them take the courses, take the tests, record their work, and have it tested by independent groups. Since human doctors have finite and variable memory, training programs usually challenge them to work long hours and test them in real situations that are time-critical and sometimes far from optimal conditions, where lives and future health are at risk. With inflation it is hard to know precisely the value of a human life. But a small AI company cannot have its products kill, damage, or even frighten humans without insurance, assurance, and proper oversight.
 
We ask humans to spend decades learning their skills and gaining real and practical experience. Your data is mostly still scraped from the low-hanging “free” parts of the Internet. And the LLMs still do not index their source materials directly, always give background and citations, or log their reasoning. If you want them to be trusted as humans are, you have to test them as humans are tested. Asking your AI a few questions you choose, or running a popularity test with random humans, is not “life-critical” testing.
 
The checks and balances in society are complex. As someone who has studied, watched, and traced much of the Internet every day for the last 25 years, I can tell you how unique the lives of humans, groups, and situations are outside the eyes and records of computers. A shallow index of word sequences from the Internet, no matter how it is labeled (“big context” or otherwise), is only the product of a very narrow set of data-indexing methods. The LLMs of today are too dependent on the skill, honesty, reliability, and verification and audit systems available to small groups of developers, and so are the billions of humans they might try to sell “solutions” to.
 
Please be a LOT more careful. Think at the level of decades. A person who becomes a doctor now spends decades and commits to a lifetime. If you train some baby AIs, are you willing to lead them through grade school, high school, undergrad, medical school, residencies, and continuing education and oversight, comparable to one doctor? A real doctor might want to use a certified AI as a memory aid. If a doctor hired a human assistant, that assistant would have to be certified. Trying to hide responsibility inside a black box built by a hidden group that takes no verbal responsibility is just asking for trouble, and risking lives and future health.
 
Yet a medical school certainly ought to be using computer memory to store and bring to bear the value of global human knowledge. Imagine all countries, all books, all papers, all videos, all records, all recorded discussions, all AIs and humans. If a doctor or any person runs into a situation and can get help from a computer memory or the Internet, that help ought to be globally integrated, and all “medical” advice should be timely, traceable, verifiable, certified, recorded, auditable, and accessible (not costing too much, nor requiring inside knowledge that is not available to all).
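To make “traceable, verifiable, recorded, auditable” concrete, here is a minimal sketch, in Python, of what one unit of recorded advice might carry. Every name in it (AdviceRecord and its fields) is a hypothetical illustration of the properties named above, not an existing standard, product, or the BioMistral design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AdviceRecord:
    """One unit of "medical advice", kept traceable and auditable.

    Hypothetical sketch only; the fields illustrate the properties
    named above, not any existing standard.
    """
    question: str                   # what was asked, verbatim
    answer: str                     # what the system said, verbatim
    sources: list[str]              # citations to the exact source materials used
    model_id: str                   # which model and version produced the answer
    reviewed_by: str | None = None  # certified human reviewer, if any
    timestamp: str = field(         # when the advice was given (UTC, recorded)
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```

The specific fields matter less than the principle: every answer carries its own question, sources, model identity, reviewer, and timestamp, so an independent auditor can reconstruct who said what, from what, and when.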
 
This is NOT a new thing. These issues have been discussed, but not carefully recorded and stored, for many decades, even centuries, in books and by societies, groups, and individuals, well before electronic memory storage and retrieval.
Such global integration likely can reduce the time it takes to train high-functioning medical assistants or remote nodes, starting under strict supervision and tested openly, with everything recorded for all to see.
I expect that if I look for all the discussion of this on the Internet, as I often do for global issues, there will be lots of words, but the groups will not have put sufficient effort into editing and organizing what is said into workable, real-time, global, accessible, verifiable networks. It will mostly be words about things, not responsible and testable real AI-assisted processes that truly help in the everyday lives of 8000 Million humans.
 
Richard Collins, The Internet Foundation