I don’t mind so long as all results are vetted by someone qualified. Zero tolerance for unfiltered AI in this kind of context.
If you need someone qualified to examine the case anyway, what’s the point of the AI?
The AI can examine hundreds of thousands of data points in ways that a human cannot.
In the test here, it literally only handled text. Doctors can do that. And if you need a doctor to check its work in every case, it has saved zero hours of work for doctors.
"how high processing power computers with AI/LLMs can assist in a lab and/or hospital environment"
This is an enormously broader scope than the situation I actually responded to, which was LLMs making diagnoses and then getting their work checked by a doctor.
Residents need their work checked also. I don’t understand your point.
In the example you provided, you’re doing it by hand afterwards anyway. How is a doctor going to vet the work of the AI without examining the case in as much detail as they would have without the AI?
Input symptoms and patient info -> spits out odds they have x, y, or z -> doctor looks at that as a supplement to their own work or to look for more unlikely possibilities they haven’t thought of because they’re a bit unusual. Doctors aren’t gods, they can’t recall everything perfectly. It’s as useful as any toxicology report or other information they get.
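As a rough sketch of that input -> odds -> review flow (every condition name, symptom, and number below is invented for illustration; a real system would use trained models and clinical data, not a hardcoded table):

```python
# Toy sketch of "symptoms in -> ranked odds out" as a supplement for a clinician.
# All priors and symptom associations here are made up, not medical facts.

TOY_KNOWLEDGE_BASE = {
    # condition: (prior probability, associated symptoms) -- invented numbers
    "condition_a": (0.05, {"fever", "cough"}),
    "condition_b": (0.01, {"fever", "rash"}),
    "condition_c": (0.001, {"rash", "joint pain"}),  # the "unusual" one
}

def rank_conditions(symptoms):
    """Score each condition by prior * fraction of its symptoms matched,
    then normalize, returning a ranked list for a doctor to review --
    a supplement to their own work, not a diagnosis."""
    scores = {}
    for name, (prior, assoc) in TOY_KNOWLEDGE_BASE.items():
        overlap = len(symptoms & assoc) / len(assoc)
        scores[name] = prior * overlap
    total = sum(scores.values()) or 1.0
    return sorted(
        ((name, score / total) for name, score in scores.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

for name, odds in rank_conditions({"fever", "rash"}):
    print(f"{name}: {odds:.1%}")
```

The point of the sketch is the shape of the workflow: the output is a ranked list that includes the rare possibilities a person might not recall, and a qualified human still makes the call.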
I am not doing my edits by hand. I am not using a blade tool and spooling film. I am not processing it. My computer does everything for me, I simply tell it what to do and it spits out the desired result (usually lol). Without my eyes and knowledge the inputs aren’t good and the outputs aren’t vetted. With a person, both are satisfied. This is how all computer usage basically works, and AI tools are no different. Input->output, quality depends on the computer/software and who is handling it.
TL;DR: Garbage in, garbage out.
Why do skilled professionals have less-skilled assistants?
Usually to do work that needs to be done but does not need the direct attention of the more skilled person. The assistant can do that work by themselves most of the time. In the example above, the assistant is doing all of the most challenging work, and then the doctor is checking all of it.