LLMs also hallucinate in medical contexts

This shouldn't surprise anyone, but it turns out LLMs also make things up when used by doctors:

[Professors Allison Koenecke and Mona Sloane] determined that nearly 40% of the hallucinations were harmful or concerning because the speaker could be misinterpreted or misrepresented.

(From: Researchers say AI transcription tool used in hospitals invents things no one ever said | AP News)

The article lists some examples: the tool made up violent rhetoric, racial commentary, and medications out of thin air.