Please spread the word that #ChatGPT answers are statistical only, and can be absolutely WRONG.
Unaware users won't notice because at the same time it is the king of bluff 🃏
OpenAI should add a BIG WARNING to their console and API repos.
Guess the correct answer 👇👇👇
DataKnightmare
in reply to Gaël Duval - /e/OS & Murena

I have come up with a fairly accurate notice:
Also, you may find the last two episodes refreshing :)
https://www.spreaker.com/show/dataknightmare-en
DataKnightmare (English Version) on Spreaker

Gaël Duval - /e/OS & Murena
in reply to DataKnightmare

DataKnightmare
in reply to Gaël Duval - /e/OS & Murena
I hear you, but I have to firmly disagree. LLMs are by design exactly bullshit engines, in Frankfurt's definition: they generate entirely plausible content in a sort of authoritative voice, with no relation whatsoever to facts. So they do not literally lie, but neither do they tell the truth, since the truth value of everything that comes out of an LLM is not bound by facts.
Sorry if I seemed to suggest the term "bullshit" was just derogatory; it is actually an accepted term in academia for exactly this kind of content.
Regarding the alleged disruptiveness, I would say so far it is an unsubstantiated marketing claim. Yes, LLMs can produce text. But unless that text is very thoroughly vetted by a competent human, I for one would not touch it with a stick. LLMs replacing journalists, lawyers and teachers? Three answers:
1) it is a ridiculous claim
2) it is also very demeaning to socially important professions, and
3) with Weizenbaum, I say the point is not whether a machine can do a job, but whether it should, and the answer I give myself is that there are jobs which should not be automated, because we as humans would be degraded by that.
Thank you!