The same reason people use google to look something up instead of going to the library
Google returns sources that you can evaluate for accuracy.
Chatgpt just says things.
Every output of chatgpt should end with “source: just trust me bro”.
Chat gpt said things you can evaluate, which i did by googling it. And when i could not find the event in question, i went back into Lemmy and asked for more information. So tell me where i err’d? Was it not taking the poster’s word on it? Or trying to get context in the first place?
You ~~err’d~~ fucked-up twice. Once when you flat out failed to find anything using Google, when other people clearly had no trouble at all. If you’re telling the truth, this just means you suck at Google. There’s no reason to be googling chatgpt’s hallucinations instead of searching for the stuff an actual human told you about.
The second time was when you took chatgpt seriously. Just don’t. It’s a very expensive toy that occasionally does something cool. We’re still trying to figure out if it’s actually useful for anything, or if it’s just really good at appearing useful.
Ok, one: chill the fuck out
Two: when google did not return anything useful, for WHATEVER REASON, i didn’t come back and assume the event didn’t happen, i asked for MORE info, like a good little netizen.
Three: the event chat gpt referenced was NOT a hallucination: https://slate.com/news-and-politics/2002/10/paul-wellstone-s-memorial-service-turns-into-a-pep-rally.html Surprise! When i looked up Paul Wellstone and filming at a memorial, this is the event i found on the first page.
Four: me bringing up chat gpt was due to just how uncharacteristically it shut down my query. So i did my due diligence. Chill out.
It’s very clear that you “attempted” to call the original commenter out for false information, just ‘subtly’ in case you were wrong, and got called out yourself for it.
Your defensiveness is just making it more clear. That’s why you’re getting down voted.
Incorrect. I don’t know Paul Wellstone. But the poster brought up that trump had a Paul Wellstone event. So i wanted to know what happened. Instead of attempting to fabricate meaning that isn’t there, how about not flying off the handle. I am defensive because an earnest question of mine was instantly down voted. I even brought up an event that was close to what was being talked about and asked if there was another one he meant.
You can retrieve sources from chat gpt. And that is beside the point that i didn’t simply rely on gpt. Even without prompting, i did my own digging on google, found his wiki page, looked up articles about Paul and filming at a memorial, and only found the incident from 2002. That’s two more paths to sources that failed me.
Chat gpt is a tool that is useful if used right, but even i did not take its word for it.
Chat GPT often makes mistakes. They call them “hallucinations”. And at one point it completely made up court cases that got two lawyers sanctioned for citing them.
https://www.forbes.com/sites/mattnovak/2023/05/27/lawyer-uses-chatgpt-in-federal-court-and-it-goes-horribly-wrong/
Chat GPT is not a search engine no matter how much Bing tries to tell you it is.
Yep, no doubt. I have used chat gpt extensively and have found it hallucinating on my own questions. That was not the case when it referred to the 2002 event, but i know it does that. It is a tool like google. And google puts pseudoscience and conspiracy theories at the top of the list sometimes too when you’re trying to fact find. You have to know the limitations of what it is capable of. Case in point: when i asked about this event, i didn’t assume gpt’s answer was correct; google gave links exclusively to coverage of the 2002 event, completely ignoring the Vietnam portion of my query; and i still returned to ask the poster for more info to get context. I don’t know what more people could have wanted from me.