IdleHans is a well-known poster on the Charlton Life forum, particularly famous for his unique sense of humor and witty posts. One of his most notable contributions is the creation of "The Charlton Life Book Club", which is a tongue-in-cheek series where he humorously reviews or comments on books. In addition, IdleHans is often recognized for his sharp and sarcastic commentary, and his ability to provide amusing takes on various topics, especially regarding Charlton Athletic and general football discussions.
His posts have a distinct style—often laced with dry humor, and he has garnered a loyal following due to his playful and irreverent approach to forum discussions. This reputation has made him one of the more iconic figures on the forum.
Is there a CL book club? I've never heard of it
My first thought as I read that was “I need to find that thread, how have I missed it?!!”
https://www.theguardian.com/politics/2025/mar/13/technlogy-secretary-peter-kyle-asks-chatgpt-for-science-and-media-advice?CMP=Share_AndroidApp_Other

Released because the queries were made in an official capacity.
Can someone explain why this is a story please?
No article will be a story for everybody, particularly if you are already well versed in the topic.
But for this one, many newspaper readers probably treat ChatGPT similarly to Google search: enter a query and you get a response.
The thing is that you are likely to submit much more private information to ChatGPT than to a search engine. The query and the response are saved by the company, and in certain circumstances that interaction can be accessed relatively easily, e.g. it is not end-to-end encrypted (cf. WhatsApp) and it is associated with a logged-in user.
Being aware of that distinction is no bad thing, which is why I think it is newsworthy.
Alternatively, here's the summary of ChatGPT's answer:
I caught an interview on R4 Today with the journo who submitted the request. He said he has been looking at the issue of AI being used by government for a while. He admitted that the Telegraph headline was needlessly catty (no surprise there) and that there was little to be concerned about with the questions in this case. However, he worries that if it becomes a more embedded habit, there is a risk that duff information can inform government decision-making. Also today I saw an authoritative-looking tweeted graph from research showing that AI search engines gave duff information in 60% of cases. (That's why I like Perplexity: since it gives all the sources it used, I can follow up to make sure its answers can be relied on.)
If it's 60% inaccurate, then it's far more reliable than the Telegraph
https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php

Among other things it shows the paid-for versions often doing worse than the free versions!
That's interesting. But I think it looks deeply into one aspect of using generative AI that is very different from many of its most valuable real-life uses.
The research shows that using AI gives a different response from using Google. Some might interpret that as "well, AI can't be trusted, can it?", whereas others might sensibly think "if I want a bit of information, I should rely on Google to fetch it for me, rather than using AI".
In reality, what is the practical use case the research is looking to investigate? In other words, when do you ever already have a fact and then decide to use AI to confirm that you have it right? It's very counter-intuitive.
The primary value of such an exercise is highlighting the critical limitations and risks inherent in AI citation accuracy. But AI chatbots should not currently be relied upon to verify or determine factual details already available from trusted sources, especially news. Instead, they’re more effectively utilised as supportive tools - summarising content, assisting with creativity or offering preliminary insights - while facts requiring high accuracy and proper attribution should always be confirmed through direct, reputable and authoritative sources.
I use a word processor on a computer. Sometimes, when I type stuff, it's rubbish. That's not the fault of the program I am using.
Been playing around with GROK. It's actually quite fun - the way it responds is conversational and offers ideas to expand on what you ask it. Shame the company has the hand of Musk on it but so far it's been quite useful.
It's quite funny when you ask Grok if some of the claims by the owner's best friend are true. More often than not, the statements are said to be false.
Sometimes the answers aren't what the owner of X would like to see.