Can this artificial stuff be creative? For example, could this stuff make up a credible love story between an octopus and a Prime Minister who has ADHD, set on the top of a mountain where the only nourishment is a Fortnum & Mason afternoon tea whilst a nuclear war is going on, but with a happy ending for all the citizens of the world except racists and carnivores? All dialogue to be in the form of song or Morse code.
(Scene: A windswept mountaintop. The sky is fiery red with nuclear fallout. A table with a pristine Fortnum & Mason afternoon tea stands improbably in the middle. An octopus and the Prime Minister lock eyes. The Prime Minister begins to sing...)
Prime Minister: (nervously fidgeting with a teacup)
"I’m a leader with scatterbrain flair,
But I see your tentacles dancing there.
The world’s ablaze, but my mind’s on you,
Could love be our treaty? Could this be true?"
Octopus: (tentacles gracefully sway, singing in a deep, soulful tone)
"Prime Minister, your words are strange,
But I feel your heart through this nuclear exchange.
Your focus may wander, but here you’re steady,
For love, my tentacles are always ready."
(They clasp hands—or tentacles—over a plate of scones. A peace summit of emotion begins.)
Duet: (harmonising as they butter scones together)
"We’ll save the world with tea and jam,
No bombs, no fear, just clotted cream glam.
An octopus, a PM, an unlikely pair,
But love will rewrite despair."
This is about half of what it came up with.
Pretty good, but unfinished.
I didn't want to flood the thread. It could write the whole 90-minute show.
So was Beethoven's 10th, as was The Canterbury Tales!
Creative certainly. I don’t really have a use for ChatGPT for anything other than amusement these days, but by way of an example: it’s my birthday in a few weeks and my daughter has persuaded me in my dotage to have a tattoo. Not fussed to be honest, but I thought why not. What does it matter. Couldn’t think of anything I really wanted, not even a CAFC tattoo, so I went for the initials of my two children and three grandchildren. Hardly inventive. I asked ChatGPT to come up with a design incorporating the letters and after a couple of tweaks I’m really pleased and impressed with the outcome. Not something a traditional search engine could begin to do.
Have to say that I am a little underwhelmed by the hype about AI.
We have experienced technical advances ever since we began to leave Africa all those millennia ago!
Wheels, agriculture, boats, writing, printing, guns, industrialisation, looms, railways, electricity, cameras, typewriters, cars, planes, radio, television, atomic power / bombs, radar, cheap refrigerators, birth control, photocopiers, computers, the internet, the f...ing mobile phone, goal line technology, VAR, "AI"?
Sung to "We didn't start the fire" or the Big Bang.
Most of these have had massive and lasting effects on employment and people's ability to communicate.
But, in the end things just carry on and society changes a little.
For instance, we no longer have servants or hundreds of thousands of ladies who spend hour after hour typing and retyping documents. Is this good or bad? But we do have more engineers and influencers. Is this good or bad?
"AI" is able to reformat a document with better syntax or look things up on the internet but I don't think this is a big deal.
A big change might eventually come when computers become intrinsically malicious, devious, manipulative and dishonest as well as clever and wealthy but this has little to do with "AI".
Unfortunately, this is an incredibly naive view of things. AI is a *generative* leap forward (and not for the better - at least not for the vast majority of humanity).
We're not talking about 'Electricity', or 'Antibiotics' or 'the Internet' here. We're talking about the replacement of around 40% of jobs over the next 20-25 years, on a planet that is already hurtling towards global conflict within that period, with the background of climate change rendering vast swathes of it pretty much uninhabitable, and all under the control of oligarchs puppetmastering populist wannabe (or actual) dictators.
I don't think you've fully grasped the scale of the issue - referring to 'reformatting a document'... 😬
I think stevexreeve has a point regarding hype around GenAI. We underestimate how much human input is needed to train these tools.
So much so that even the most anti-human of the tech billionaires, Elon Musk, is worried that we are running out of training data a.k.a. high quality human knowledge:
Only a tiny proportion of the world's workforce will fulfill that 'high quality human knowledge'.
Yes, that type of high-quality knowledge has always been produced by a tiny section of humanity, but I also include the corrections made to GenAI output by journalists, computer programmers, lawyers, mothers, wiki maintainers, plumbers, mechanics and so on, before pushing it back to the web from whence it gets harvested once again by GenAI for the next wave of training, in an endless loop that crucially needs humans.
In a sense, GenAI is nothing without vast amounts of human produced training data.
You're missing the point. AI doesn't NEED human-produced data when it can just create its own, and have it be reiterated as fact. LLMs needed human-produced data to begin with. They don't now - and companies are falling over themselves to enable it so they can save money (read: make people redundant). You already have populist governments in power across the globe who have no interest in facts. Imagine how bad it will be when people have no chance of finding out the *actual* truth about something because the 'alternative facts' provided by AI are indistinguishable from the 'actual facts'.
Most people are incredibly naive about this, and have no clue just how dangerous AI will be - because they're focusing on the 'intelligence' bit (which isn't actual intelligence, and won't be for decades) and not the 'artificial' bit (which is the real problem).
As soon as someone builds a reliable model that creates its own synthetic data autonomously, then I agree we’re all toast. But we’re not there yet. And might never be.
We're ALREADY there. AI is already creating its own synthetic data. Facebook and Twitter are already full of AI slop that millions of people can't differentiate from reality. That data is being used to create other data. And all of it is being done without human input - indeed, Zuckerberg has literally stated in the past week that Meta will be creating artificial accounts with the express purpose of 'enhancing interactivity' (i.e. showing more ads).
- Training GenAI with its own output leads to dilution of real content, expressed in grammatical fluff, after only a few iterations, with very little connection with recent news, events or even reality.
- I doubt the BBC, Wikipedia, the Telegraph, the Guardian, the FT and so on will be publishing unedited GenAI output, so I would continue trusting those sources more than less reputable ones, and hopefully so would reputable GenAI tools.
- I agree X/Facebook/social media would publish basically unedited content, and there's a big danger with rewriting history, presenting alternative facts, distorted/misleading perspectives, etc, but that's an issue with social media in general.
Access to accurate information in the media is of some concern of course, and legitimate media companies will need human checking. But that isn't the only issue. The main issue for jobs is AI replacing humans, and corporations will develop their own AI systems to do that without the need or relevance of accurate media, mainstream or otherwise. Work processes with finite and appropriate content - we are not talking about search engines, we are talking about work processes and outcomes.
You only have to visit X or TikTok for five minutes to see how much misinformation and outright lies are being put out there by people. It was realised that social media was a very powerful tool, and it has without doubt influenced the outcomes of elections and referenda. If AI is taking the misinformation out there and recycling it as valid information, then in no time it will be impossible to distinguish fact from fiction. It’s a massive concern. What’s more of a concern is that there appears to be no shortage of people who are fully invested in the lies and misinformation.
I haven't taken much notice of this up to now, but with a lot of discussion in the media on the reduction in fact checking on some social media platforms, I wondered how this will impact AI.
I understand that these AI systems gather some of their info from social media, and with so much misinformation on those platforms, how will we be able to rely on the information that comes out of AI?
AI, like humans, will gather information from many sources, although some humans rely on their small sample of perceived 'trusted' sources. However, AI, by virtue of its searching power, will likely look at many more sources than a typical human. Even the most trusted sources will contain misinformation - either by design or by error. Then there are the differing interpretations of the same data.
I just asked ChatGPT a question to which I knew the answer - where is Bob Dylan's song "Just Like Tom Thumb's Blues" set? It gave me five successive incorrect answers, including New Orleans, Bogota and New York City, before I pointed to the give-away in line 1 of the song: "When you're lost in the rain in Juarez". It apologised and hoped I would excuse it. Having previously been impressed by what I had asked it, this run of inaccurate answers shook my trust in it. When I previously asked it about the French Resistance in the Haute-Loire, and the rights of dual British/French citizens re. medical access, I took it on trust that the information was correct. Never again.
No doubt GenAI and AI will lead to job losses, but I think the creative industries are especially at risk, largely because weird output or hallucinations are not necessarily a bad thing there.
For other applications, the impact is less clear as it is more of a tool needing 'supervision' because accuracy and relevance are essential.
There’s lots of talk of models generating their own synthetic training data (see Zuckerberg), but I’ve yet to see an actual implementation that stands up: a model trained on synthetic data that doesn’t fall apart at the seams before it even gets off the ground.
Anything trained on data that isn’t sufficiently sanitised tends to nosedive into hallucinations and nonsense.
Interestingly, ChatGPT has not learned from its mistakes, or from the correct information I gave it three times regarding the setting of "Just Like Tom Thumb's Blues", and it continues to pump out nonsense about the song - nonsense that varies each time I ask.
Does anybody else vaguely remember the time when most people believed that if something was in print then it must be true?
Then people believed that anything they saw on TV must be true.
Now people believe that anything they read or see on the computer must be true.
Can't really see that lasting very long now we have AI.
People (especially the younger ones) aren't as stupid as computers try to make us believe!
It's interesting to note how many people assume that the tool they're using is in error, without considering, for example, that they're using the wrong tool. If you "know" a "fact" to be "true", then using ChatGPT to establish that you're right is worthless.
The problem is that history, including quite a lot from very recent times, suggests that a great many people are all too happy to frame their world views on hallucinations and nonsense.
Steering a little away from ChatGPT and towards more general AI, has anyone taken in the government's 'AI Opportunities Action Plan'? Fifty action points to make sure Britain remains a player in the sphere of AI:
https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan#fnref:14
Counter this with the fact that last year 6,500 millionaires exited the UK, with 20,000 expected this year (mostly due to tax hikes), and I can't see where and how we have the entrepreneurial spirit to make this happen.
The skill is asking the right question in the right manner.
This is about half of what it came up with.
How about the song itself: https://www.udio.com/songs/bcqWuDcCn54s1eoQYKuERU
He said, being very late to the party as always.