Changing the metaphor for LLM unreliability: they're bullshitting, not hallucinating:
https://www.psypost.org/scholars-ai-isnt-hallucinating-its-bullshitting/
@rdnielsen 100% agree, I wrote about this last year when I made the (Medieval) Content Farm. The way LLMs work makes them perfect bullshit-filter disruptors.
@rdnielsen I've long said LLMs are basically the unearned confidence of a mediocre white man coded into software.
@rdnielsen I love the reframing, and I love even more that the people suggesting it are from the University of Glasgow. We need more Glaswegian language on an internet so often dominated by the prudishness of American language use.