The logical end of ‘the solution to bad speech is better speech’ has arrived in the age of state-sponsored social media propaganda bots versus AI-driven bots arguing back.
LLMs are, at best, a quaternary source: they scrape secondary and tertiary sources. As such, they’re little better than asking passersby on the street. You might get a general idea of what the zeitgeist is, but how true any particular statement actually is will vary wildly.
Math itself is designed to describe relationships between things. That doesn’t mean you can’t mock up a ‘reasonable-seeming’ equation that turns out to be absolute nonsense on further examination, yet which a layman will take as ‘true enough’.
LLMs don’t cite things. They provide an approximation of what a human might write. They don’t know what they’re writing or how it relates to the ‘real world’ any more than the occupant of a Chinese Room does.