Say it with me again now:
For fact-based applications, the amount of work required to develop and subsequently babysit the LLM to ensure it is always producing accurate output is exactly the same as doing the work yourself in the first place.
Always, always, always. This is a mathematical law. It doesn’t matter how much you whine or argue, or cite anecdotes about how you totally got ChatGPT or Copilot to generate you some working code that one time. The LLM does not actually have comprehension of its input or output. It doesn’t have comprehension, period. It cannot know when it is wrong. It can’t actually know anything.
Sure, very sophisticated LLMs might get it right some of the time, or even a lot of the time in the case of very specific topics with very good training data. But their accuracy cannot be guaranteed unless you fact-check 100% of their output.
Underpaid employees were asked to feed published articles from other news services into generative AI tools and spit out paraphrased versions. The team was soon using AI to churn out thousands of articles a day, most of which were never fact-checked by a person. Eventually, per the NYT, the website’s AI tools randomly started assigning employees’ names to AI-generated articles they never touched.
Yep, that right there. I could have called that before they even started. The shit really hits the fan when the computer is inevitably capable of spouting bullshit far faster than humans are able to review and debunk its output, and that’s only if anyone is actually watching and has their hand on the off switch. Of course, the end goal of these schemes is to be able to fire as much of the human staff as possible, so it ultimately winds up that there is nobody left to actually do the review. And whatever emaciated remains of management are left don’t actually understand how the machine works nor how its output is generated.
Yeah, I see no flaws in this plan… Carry the fuck on, idiots.
Did you enjoy humans spouting bullshit faster than humans can debunk it? Well, brace for impact because here comes machine-generated bullshit! Wooooeee’refucked! 🥳
To err is human. But to really fuck up, you need a computer.
A human can only do bad or dumb things so quickly.
A human writing code can do bad or dumb things at scale, as well as orders of magnitude more quickly.
Okay, yes, I agree with you fully, but you can’t just say it’s a mathematical law without proof. That’s something you need to back up with numbers, and I don’t think “work” is quantifiable.
Again, yes, they need to slow down, but I have an issue with your claim unless you’re going to back it up. Otherwise you’re just a crazy dude standing on a soapbox.
Your statement is technically true but wrong in practice, because it applies to EVERYTHING on the Internet. We had tons of error-ridden garbage articles written by underpaid interns long before AI.
And no, fact checking is quicker than writing something from scratch. Just like verifying Wikipedia sources is quicker than writing a Wikipedia article.
I think it’s worse than that. The work is about the same. The skill and pay for that work? Lower.
Why pay 10 experienced journalists when you can pay 10 expendable fact checkers who just need to run some facts/numbers by a Wikipedia page?
I can see how it might be seen as more facile to correct/critique than to produce the original work. This is actually true, same as how it’s easier to iterate on something than to wholesale create the thing.
Definitely find it easier to extend or elaborate on something “old” over crapping out a new thing, although I can see how that is not always the case if it’s too “legacy”. ChatGPT is intriguing because it can arguably generate many of the parts modularly; you would just need to glue them together properly and ensure all the outputs are cohesive and coherent.
For example: if you’re a lawyer and you generate anything, you must at the very least
- Read, not dictate
- Ensure all caselaw cited a) definitely exists and b) is relevant to the facts and arguments it is being used to support
The cost, however, is not the same. I can totally see the occasional lawsuit as the cost of doing business for a company that employs AI.
While that works for “news agencies,” it’s a free money glitch for the consumer when it’s used in a customer support role.
Edit: clarification
Pretty sure an airline was forced to pay out on a fake policy that one of their support bots spouted.
Always, always, always. This is a mathematical law.
Total bullshit. We use LLMs at work for tasks that would be nearly impossible and require obscene amounts of manpower to do by hand.
Yes, we have to check the output, but it’s not even close to the amount of work to do it by hand. Like, by orders of magnitude.
LLMs are useful for recalling from a fixed corpus where you dictate they cite their source.
They are ideal for human in the loop research solutions.
The whole “answer anything about anything” concept is dumb.
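Something like this, as a rough sketch of that pattern (the corpus is a toy and `call_llm` is a hypothetical stand-in for whatever model API you actually use):

```python
# Toy sketch of "fixed corpus + forced citation + human in the loop".
# Nothing here is real: the corpus is made up and call_llm() is a placeholder.
corpus = {
    "doc-17": "Refunds are allowed within 24 hours of booking.",
    "doc-42": "Checked bag fees are waived for premium members.",
}

def call_llm(prompt: str) -> str:
    raise NotImplementedError("swap in your actual model API here")

def answer_with_citation(question: str) -> str:
    # Naive retrieval: pick the passage sharing the most words with the question.
    def overlap(text: str) -> int:
        return len(set(question.lower().split()) & set(text.lower().split()))

    doc_id, passage = max(corpus.items(), key=lambda kv: overlap(kv[1]))
    prompt = (
        f"Answer using ONLY this passage and cite it as [{doc_id}]. "
        f"If it does not answer the question, say so.\n\n"
        f"Passage: {passage}\nQuestion: {question}"
    )
    # A human still reviews the answer against the cited doc before it goes anywhere.
    return call_llm(prompt)
```

The model never gets to free-associate: every answer has to point back at a specific document the human in the loop can check.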
I disagree with the “always” bit. At some point in the future AI is actually going to get to the point where we can basically just leave it to it, and not have to worry.
But I do agree that we are not there yet. And that we need to stop pretending that we are.
Having said that, my company uses AI for a lot of business-critical tasks and we haven’t gone bankrupt yet. Of course, that’s not quite the same as saying that a human wouldn’t have done it better. Perhaps we’re spending more money than we need to because of the AI, who knows?
…Nnnnno, actually always.
The current models in use now (and the subject of the article) are not actual AIs. There is no thinking going on in there. They are statistical language models that are literally incapable of producing anything that was not originally part of their training input data, just reassembled and strung together in different ways. These models can’t actually generate new content, they can’t think up anything novel, and of course they can’t actually think at all. They are completely at the mercy of whatever garbage is fed into them and are by definition not capable of actually “understanding” their output, because they are not capable of understanding at all. And because these are statistical models, the output is always dependent to some extent on an internal dice roll, and the possibility of rolling snake eyes is always there no matter how clever or well tuned the algorithm is.
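To make the dice-roll point concrete, here’s a toy sketch (made-up probabilities, not any real model’s internals): at every step the model samples the next token from a probability distribution, so a rare bad continuation will eventually come up if you generate enough text.

```python
import random

# Made-up next-token probabilities for a single generation step.
# The point: sampling means the rare, damaging continuation WILL
# show up eventually, no matter how well tuned the distribution is.
next_token_probs = [
    ("cleared",   0.90),
    ("charged",   0.09),
    ("convicted", 0.01),  # rare, but never impossible
]

def sample_token(probs: list[tuple[str, float]]) -> str:
    r = random.random()
    cumulative = 0.0
    for token, p in probs:
        cumulative += p
        if r < cumulative:
            return token
    return probs[-1][0]  # fallback for floating-point rounding

# Run the "same" generation 10,000 times and count the bad rolls.
bad = sum(sample_token(next_token_probs) == "convicted" for _ in range(10_000))
print(f"{bad} of 10,000 runs rolled the damaging continuation")
```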
This is not to say humans are infallible, either, but at least we are conceptually capable of understanding when and more importantly how we got something wrong when called on it. We are also capable of researching sources and weighing the validity of different sources and/or claims, which an LLM is not – not without human intervention, anyway, which loops back to my original point about doing the work yourself in the first place. An LLM cannot determine if a published sequence of words is bogus. It can of course string together a new combination of words in a syntactically valid manner that can be read and will make sense, but the truth of the constructed text cannot actually be determined programmatically. So in any application where accuracy is necessary, it is downright required to thoroughly review 100% of the machine output to verify that it is factual and correct. For anyone capable of doing that without smoke coming out of their own ears, it is then trivial to take the next step and just reproduce what the machine did for you. Yes, you may as well have just done it yourself. The only real advantage the machine has is that it can type faster than you and it never needs more coffee.
The only way to cast off these limitations would be to develop an entirely new real AI model that is genuinely capable of understanding the meaning of both its input and output, and legitimately capable of drawing new conclusions from its own output, also taking into account additional external data when presented with it. It would also have to be able to show its work, so to speak, to demonstrate how it arrived at its conclusions and back up their factual validity. This requires throwing away the current LLM models completely – they are a technological dead end. They’re neat, and capable of fooling some of the people some of the time, but on a mathematical level they’re never capable of achieving internally provable, consistent truth.
I hope he wins and the fine makes Microsoft’s eyes water. Everyone needs to slow the fuck down with this, and they won’t until there are real, painful consequences.
MS can drop billions on game company acquisitions like it’s no big deal? Cool, give this guy 1 billion dollars for randomly singling him out and automatically accusing him of sex crimes.
Maybe then all the tech bros might pause for 3 seconds before they keep feeding shit into their models illegally.
This US election was going to be a no-good-choices shitshow no matter what. But I really dread the AI-amped shitshow we’re gonna get.
This is the best summary I could come up with:
Worse yet, the erroneous reporting was scooped up by MSN — the somehow not-dead-yet Microsoft site that aggregates news — and was featured on its homepage for several hours before being taken down.
It’s an unfortunate example of the tangible harms that arise when AI tools implicate real people in bad information as they confidently — and convincingly — weave together fact and fiction.
And if Bigfoot conspiracies slip through MSN’s very large and automated cracks, it’s not surprising that a real-enough-looking AI-generated article like “Prominent Irish broadcaster faces trial over alleged sexual misconduct” made it onto the site’s homepage.
According to the NYT, the website was founded by an alleged abuser and tech entrepreneur named Gurbaksh Chahal, who billed BNN as “a revolution in the journalism industry.”
Underpaid employees were asked to feed published articles from other news services into generative AI tools and spit out paraphrased versions.
Eventually, per the NYT, the website’s AI tools randomly started assigning employees’ names to AI-generated articles they never touched.
The original article contains 559 words, the summary contains 167 words. Saved 70%. I’m a bot and I’m open source!
And now I’m reading a computer’s version of a story describing how a computer wrote a story that should have been discarded.
It’s even better than that. It’s a computer’s version of a story describing how a computer wrote a story which was then front-paged by a computer.