Chat GPT
Comments
Nadou said:
Interestingly, ChatGPT has not learned from its mistakes, despite the correct information I gave it three times regarding the setting of "Just Like Tom Thumb's Blues". It continues to pump out nonsense about the song - nonsense that varies each time I ask.

fenlandaddick said:
The skill is asking the right question in the right manner. It is just hallucinating; LLMs do not like saying "I'm not sure". If they can't formulate the answer, they will make it up, or try to get close. Or perhaps their model has some poor data in it that is causing the mistake. Think of a person in the same situation: when most people don't know, they bullshit. Salespeople are quite good at this. Try positioning your question differently and you may find the answer is corrected.
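If you're using the API rather than the chat window, you can also tell the model up front that "I'm not sure" is an acceptable answer. A minimal sketch in Python, assuming the official openai client; the model name is only an example:

```python
# Give the model explicit permission to admit uncertainty instead of guessing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {
            "role": "system",
            "content": (
                "If you are not confident of a fact, say 'I'm not sure' "
                "rather than guessing. Do not invent titles, dates or places."
            ),
        },
        {
            "role": "user",
            "content": "Where is Bob Dylan's 'Just Like Tom Thumb's Blues' set?",
        },
    ],
)
print(response.choices[0].message.content)
```

It won't stop hallucination altogether - nothing does - but giving the model a licensed way out tends to cut down on invented detail.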
Apparently answers change day by day depending on who is using it… so if a bunch of numpties are prompting it on the day you try, be prepared for rubbish answers. Sounds a bit moody to me.
Arthur_Trudgill said:
I do not treat it as the Oracle. It is a tool, and just like any other tool you should use multiple sources to check the results. But then I never believe what I am told until I have checked it.
Nadou said:
I have just asked it about which books I have written. It recognises that I am a writer, but its responses about the books it claims I have written are hilarious. Any kid doing research on my books (it does happen) who uses this facility will be sorely misled.
I have no particular axe to grind, but having believed what it told me about things I needed to know, I am disappointed to find, as a result of two little experiments, that I cannot be sure it is giving me accurate information.
My buddy, and now I too, use him to kickstart ourselves on a new project, which can look a bit daunting when you start with a blank sheet of paper. You procrastinate a bit, maybe make a false start. Claude does all that in a second or so, and you are fired up and ready to take it on from there.
Or, more routinely: our niece is a physiotherapist with her own clinic (we are eager customers too; she's a star). She has a nice website and asked us to help create English-language page texts that would bear scrutiny by native EN speakers. Of course, between my wife and me we were ideally qualified, but it looked like an elephant task - until we got Claude involved. Fifteen minutes, the whole thing done. We looked at each other and sheepishly agreed we wouldn't tell Jana how we'd done it. But actually, it wasn't all done: when my wife subsequently went through it with Jana, she found paragraphs that Claude had inexplicably just missed out. We went back to Claude, got him to translate the missing bits, and now it's done, and Jana is very happy with us. A big task, crunched in minutes. And it was only a couple of years ago that I was paying a professional translator £40/hour to translate similar webpage texts.
Chizz said:
ChatGPT has lots of useful advantages: customer support and assistance, answering customers' text-based queries 24x7; quickly generating ideas to seed-corn blogs, articles, ad copy, etc.; simplifying complex educational topics - it is very easy to be brought up to a level of understanding on almost any topic in minutes; software development and bug fixing; streamlining business processes and workflows; gathering market research; wellness advice; legal, policy and regulation advice; basic language translation. The list is almost endless. I know this, because ChatGPT told me.
Weegie Addick said:
I was at a conference of entrepreneurs and business owners where someone described AI as "the shitification of the internet". That was a year ago, and I've not heard or learnt much to change my mind since.
Nadou said:
Chizz, thanks for that list of things ChatGPT is good at, but "answering customers' text-based queries" is what I asked it to do, and it gave me incorrect information. Had I not known the answers, I would have taken them at face value. Then "brought up to a level of understanding on almost any topic": great - that is what I hoped for when I asked about the Resistance in the Haute-Loire and about medical funding for living in France. But how can I trust those answers when I now know it is fallible and capable of inventing total rubbish in response to an enquiry about a topic such as which books I have written?
Chunes said:
Or give it a routine: "Before answering any future questions I have, create a checklist and ensure, before sharing your reply, that: a) the information you are giving me comes from a reputable source; b) you have fact-checked that against other sources; and c) you are sharing the key source in your answer."
Also, according to its problem page, ChatGPT was unable to access the web to search for information on 13-14 January, so that may have had some impact on your results.
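If you use the API rather than retyping the routine each session, the checklist can be pinned as a system message so it applies to every question. A minimal sketch in Python, assuming the official openai client; the model name and the sample question are only examples:

```python
# Pin the a/b/c checklist as a standing system prompt for a whole conversation.
from openai import OpenAI

CHECKLIST = (
    "Before answering any question, check that: "
    "a) the information you give comes from a reputable source; "
    "b) you have fact-checked it against other sources; and "
    "c) you name the key source in your answer. "
    "If any check fails, say so instead of answering."
)

client = OpenAI()
history = [{"role": "system", "content": CHECKLIST}]

def ask(question: str) -> str:
    """Send one question; the checklist rides along with every call."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("What is the setting of 'Just Like Tom Thumb's Blues'?"))
```

One caveat: a model can claim it has checked a source while still inventing one, so the routine raises the odds of a sourced or caveated answer rather than guaranteeing it.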
Nadou said:
Thanks Chunes. It is still giving me hilariously untrue information. When I asked for details about a book it claimed I have written, The Summer of My Italian Grandmother, it gave me a page-long, detailed plot plus an analysis of the writing style, etc. The book does not exist - I checked on Amazon and Goodreads, the latter of which, by the way, was quoted by ChatGPT as one of its sources.
And thanks for the tip about Perplexity. I asked it about my books and was given a totally correct list of them, plus full and accurate details about their general themes and style. I will, therefore, be more trusting of any response it gives me about things I don't know the answer to.
Nadou said:
I was that customer, wanting a definitive answer to a specific question on a tightly defined subject. It came up with gobbledygook.
NornIrishAddick said:
I work with Freedom of Information requests and, for most people asking for information, those looking for answers don't generally have the level of knowledge to ask a specific question on a tightly defined subject. Those that do generally know how to access the material without needing whatever skills it is that LLMs bring to the party.
I'm really dubious that such systems will actually improve matters, or reduce the time taken to generate accurate information, beyond very specific and limited usage. I can absolutely see AI working on raw data (provided it is trained in a way that makes sense to the specific user need), but I do not see how it is of wider generic benefit - and the horror stories coming from academic and legal sources suggest it has the potential to really bugger things up in a way that we have not seen IT do to date. We're being sold a universal panacea as a solution to problems that often don't actually exist, IMHO, because I don't believe that a requirement to actually do research is a problem.
I just listened to the first episode of Virtually Parkinson, in which an AI Michael Parkinson interviews Jason Derulo.
It was an underwhelming experience - not Jason Derulo's fault; he gave long answers to cover (I think) for the lack of human interaction. I read somewhere that the interview could have been conducted by email, which seems about right.
After the interview (linked below), about two thirds of the way through, the programmers(?) discuss how they felt it went and how they intend to improve the model before the next interview - that was marginally more interesting.
However, had my dear, departed Nan still been with us, I'm sure she'd have said, "Gertcha, too clever by half... too clever by half." (No argument from me.)
https://podfollow.com/1789767151/view
So, if you're a customer of a company, using one of its products, you can ask - in whatever form of words works for you - questions about that product, and you'll be presented with responses that are completely informed by facts about that product.
If you want to know something, Google is good, but fallible. It's a better choice than ChatGPT for trying to find out a fact.
I think that complaining "ChatGPT gave me a response that isn't true" is missing the point of generative AI in general and ChatGPT in particular. Generative AI and ChatGPT are enormously useful. To define them solely by a poor result in something they're not specifically designed for isn't really helpful.
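For what it's worth, the reliability of those product-specific bots usually comes not from a smarter model but from grounding: the company's documentation is placed in the prompt, and the model is told to answer only from it. A rough sketch of the idea in Python, assuming the official openai client; the product FAQ is made up:

```python
# Ground answers in supplied product text so the model answers from
# the company's documentation rather than from its training data.
from openai import OpenAI

PRODUCT_FAQ = """\
Model X100 kettle: 1.7 litre capacity, 3 kW element.
Warranty: two years from date of purchase, receipt required.
Descaling: use a white-vinegar solution, monthly in hard-water areas.
"""

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer customer questions using ONLY the documentation "
                "below. If the answer is not in it, say you don't know.\n\n"
                + PRODUCT_FAQ
            ),
        },
        {"role": "user", "content": "How long is the warranty on the X100?"},
    ],
)
print(response.choices[0].message.content)
```

Real deployments first retrieve the relevant pages from a larger document store (retrieval-augmented generation), but the principle is the same: the facts travel with the question instead of being dredged up from training data.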
I know I've passed out a couple of times due to low blood pressure, but I hadn't realised I'd missed a relegation to League Two and an entire promotion season.
Apart from that, it's not the worst summary I've read.
But it's not a good competitor for Google when it comes to establishing facts, except where it takes search results and passes them off as its own AI output.
Interesting to see that DeepSeek - the lower-cost, Chinese-developed alternative to ChatGPT - has caused some pearl-clutching among US tech stocks today.
DeepSeek threat sends tech stocks tumbling and the Fed will be watching
I heard on the radio tonight that there is a real security threat in using DeepSeek. Lots of warnings like this are going around:
"DeepSeek AI will fully compromise your digital security and personal privacy. You will expose all passwords, usernames, and accounts across every device you own. The Chinese government has unrestricted access to all your data, including your credentials, personal files, messages, and financial information.
DeepSeek actively records every keystroke, allowing them to capture everything you type - passwords, private messages, and sensitive information - in real time. Once installed, their software exploits access to other devices on your network, spreading surveillance and control far beyond the original device."