Chat GPT and other AI guff
Comments
-
In my defence I've been ill and was in bed half asleep.

Leroy Ambrose said:
    📢 Irony klaxon... 😏

    cantersaddick said:
        We're finding this too. It can't do analysis. Even simple questions it can't get write even when lots has been invested in training it.

        cafcpolo said:
            Yup! I've spent the better part of this month reworking code deployed by a data analyst that was clearly knocked up by ChatGPT. Full of bugs and slow as fuck in our data warehouse. Their solution was to spend an extra £10k a month on more resources and licenses. Maybe just learn to code instead 🙄

            Dazzler21 said:
                What field are you in? I've also seen it used in my field, Business Analysis, but the amount of mistakes makes it completely unworkable. It takes twice the time due to corrections.

                Leroy Ambrose said:
                    I tell you what, we can have a good old laugh at AI, but the implications of it - in my field at least - are pretty terrifying.
-
But does it know the difference between right and wrong? Or right and write ✍🏽

cantersaddick said:
    We're finding this too. It can't do analysis. Even simple questions it can't get write even when lots has been invested in training it.
-
I think in a decade or less many people are not gonna be able to make a basic decision in life day to day without first consulting with their little AI 'friend'.

Leroy Ambrose said:
    I'll give you a personal example of the current ludicrousness around AI. A lot of this might go over people's heads, but you'll get the basic gist of it, which is all that's really important.

    Three weeks ago, I get a frantic message from one of the development directors at work on a Friday night. He's a lovely bloke, and intelligent with it. So my natural inclination is to believe him when he tells me that one of the API integrations between his application and part of the security stack that I manage is broken. I spend about 20 minutes troubleshooting it (there are a LOT of things that *could* be wrong, so I need to check all bases). Eventually, I put it down to a client-side issue (which is 'his' side of the equation), as I've literally ruled out everything else it could be. He replies back telling me it's not at his end - it's DEFINITELY on 'my' side.

    So, to humour him, I regenerate the access token that he's using (exact same rights as the current one) and provide it. He replies to say it still isn't working. That's when I twig (after he copies the output from his prompts) that it's not actually HIM I'm talking to, but Claude (the AI our developers use for pre-development work, feature design etc). I ask him to send me the prompt history - and he gets a little indignant when I say that the AI is prone to hallucinations, and VERY persuasive about them. Eventually I figure out what's wrong (the AI is asking for the wrong field when attempting to enumerate records). I tell him to change the prompt accordingly, and it, of course, now works.

    The AI's messages back to him the entire time have been comprehensively certain that the issue was with the 'other side' - and comprehensively wrong. The kicker? Once he changed the prompt and started receiving results, the response was:

    "SUCCESS!! The access issue has been resolved, and I can now see results..."

    No mention of the fact that it was talking absolute bollocks for an hour leading up to this, or an acknowledgement that the information it had provided (gleaned from its LLM's experience of generic API calls, rather than the specific API in question) was wrong.

    But here's the real crux of the issue. This is an intelligent man - he's been a software engineer for decades. Done his time on multiple iterations of products, probably been through four or five different dev stacks in his working life, getting to know all of them inside out. He's been working with APIs for the best part of ten years. He knows them better than I know them (my experience tends to be related to the security aspects of them only). Yet because Claude 'told' him that it was right, he didn't even stop to think that one of the basic things you do with an API is get the endpooint and method correct when calling it.

    Now imagine that the new iteration of developers doesn't have any of the past context he's got - built up over decades. They're not learning software development from scratch, they're learning vibecoding.

    We're fucked.
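The failure in that story is worth pinning down. Here's a minimal sketch of the mechanism - all endpoint and field names are hypothetical (the original post names none) - showing how a request with a wrong field name can return a terse error that's easy to misattribute to access rights, which is roughly what the AI did for an hour:

```python
# Hypothetical mock of a records API. The point: a valid token plus a
# wrong field name still fails, and the generic error says nothing like
# "your access is fine, your field name isn't".

VALID_TOKEN = "token-abc"
KNOWN_FIELDS = {"record_id", "owner", "created_at"}

def enumerate_records(token: str, fields: list[str]) -> dict:
    """Mock records endpoint: checks the token first, then the field names."""
    if token != VALID_TOKEN:
        return {"status": 401, "error": "unauthorized"}
    unknown = [f for f in fields if f not in KNOWN_FIELDS]
    if unknown:
        # Terse, generic rejection - an LLM pattern-matching on past API
        # experience can confidently read this as a permissions problem.
        return {"status": 400, "error": f"bad request: unknown fields {unknown}"}
    return {"status": 200, "records": [{"record_id": 1}, {"record_id": 2}]}

# What the AI-written prompt effectively asked for (wrong field name):
broken = enumerate_records(VALID_TOKEN, ["recordId"])
# After correcting the field in the prompt - same token, same rights:
fixed = enumerate_records(VALID_TOKEN, ["record_id"])
print(broken["status"], fixed["status"])  # 400 200
```

Note that regenerating the token (the 401 branch) never comes into play - which is why the new token in the story made no difference.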
As probably the last generation that had a youth before the Internet and its related tech pervaded every aspect of life, I noticed it myself with Google.

Twenty-five years ago, to go on holiday you'd generally walk into a travel agent's, give a rough outline of what you were after, your budget etc, and get a brochure with a few choices.

Google turned that into a much broader range of choice, but along with that comes the angst of "what if I make the wrong choice?" and paralysis by analysis.

Fast forward to 2026 and I know people who are using ChatGPT to analyse literally every aspect of their lives - from health issues to how to approach basic conversations with colleagues and friends. They're treating AI like some kind of trusted higher power. Some experienced colleagues are now incapable of making a decision without first having reassurance from AI, which more often than not is far from perfect, if at all accurate.

It's concerning how people are surrendering their own initiative, self-confidence and belief in their own judgement and ability - not really living their lives or making their own decisions, but being led passively by an algorithm which may or may not be accurate or correct, or even have their true best interests considered.

There are lots of wonderful possibilities and efficiencies potentially in this technology, but the cost to us as basic human beings is concerning - in the same way that the Internet and social media, for all their wonders, have left many of us yearning for a less tech-intrusive time.
-
Throwing in mistakes like that is a way to make sure that AI can't learn from what I put online.

Alwaysneil said:
    But does it know the difference between right and wrong? Or right and write ✍🏽
-
Fixed

Chunes said:
    These Palace ultras are just statistical models that calculate which word is most likely to follow another word. They don't understand anything and can't develop consciousness, any more than a large spreadsheet could.

    Raith_C_Chattonell said:
        I recently read that researchers at Anthropic were poking and prodding their latest model of Claude Sonnet when it called them out.

        The thing is, if it's developing a consciousness, what happens next? By the time it gets to its teenage years it'll be out to fuck you up completely.
-
The other week I asked ChatGPT for an up-and-coming restaurant in south east London. We went through different ones until we settled on one in Deptford - a curry place with an up-and-coming chef who's worked in some top kitchens, and loads of details.

I have OpenTable linked to my ChatGPT, so I asked it to find an available table there. It took a moment and said it couldn't find the restaurant on OpenTable, but it did say it could give me a website. So I asked for the website: "I can't find a website for this restaurant but I have an Instagram". That's when I started to twig. I asked for the Instagram, again to no avail. So I flat out asked it, "did you just make this up?" Its response? "Ah! Good catch!" I asked it why it had made it up and it basically boiled down to "it sounded plausible so I said it".

This was just for choosing a restaurant. Imagine this level of hallucination for things that actually matter.





