I wonder if (or, more accurately hope that) this kind of slop will eventually die out as people realise how little care is put into it. I am more and more convinced that if the devil existed he'd take care of the bigger stuff, but have an army of little devils that encourage people to do things like make unsupervised automated podcasts about knitting, relentlessly chipping away at the messy joys of living.
I remember this kind of slop from times well before the LLM explosion.
I'm specifically thinking of a print magazine that was designed to make you feel like you are a smart reader of science articles, without any useful information about the actual science or technology.
Why does this site want to access apps and services on my local network?
On topic, I do wonder how "the market" is going to sort this out. At this moment I'm leaning towards just banning this shit, but maybe there is a better way?
We can already see the market in action. People are increasingly hostile to online content and influencers, except for the few people they follow, just as everyone was already defensive against unsolicited email. Authenticity will become valuable in a sea of slop, and high-budget productions (think Mr Beast) will be worth little, since they can be easily faked and are hard to distinguish.
Extremely long winded. I think this person is trying to throw stones at someone else’s work, but their own is so elliptical I lost the will to find out.
Not taking away the right to your opinion, but I couldn't disagree more; I found it an excellent sociological article. One, it takes the formal concept of "bullshit" and applies it to knitting in a very methodical and strict manner. I found it novel and convincing, and the examples were great; not contrived or forced at all. IMO it was much better than many academic books or articles; an immediate share.
Two, the turns of logic are clearly laid out, in a conversational way, which would make it easy to stick a wrench in and form a polemic if you found any of her arguments or logical implications specious. That said, that does make the article quite long. But then, it is anything but "elliptical", which I think you used to mean "runs in circles and repeats itself often", while it actually means "omits parts and is thus difficult to understand" (like the ellipsis sign: …).
Also: what the heck is wrong with that podcast farm founder. I hope they have a bad year.
You only had to reach the second paragraph to find the example of an 8-person company that uses AI to generate “about 3000 podcast episodes per week, hosted by AI personalities.”
I was a couple of images in before I sussed it. Bullshit images, but pleasing enough to look at. Without them, it would have been a big wall of text, which would have put me off reading. As it was, I gave up about 25% of the way through, after sussing the images and, with them, the incoherence of the argument.
The images bring something to the article. They were cheap/quick to generate. They increase the potential payoff (more readers) without significantly increasing the cost. Without the images, the payoff (readers) would likely have been lower, below the cost of actually writing the article. Same goes for a history-of-knitting podcast or that video. Production costs would not be worth it for a very niche viewership.
Reading that made me feel like you wanted to be contrarian from the get-go and dismiss the article with the least effort possible. The whole point of the images is that they're low-effort AI slop, it's part of what she's trying to point to when someone is generating unsupervised automated podcasts about knitting.
So you're saying you can spot AI generated bullshit, but not spot a deliberate and hilarious contrivance that the author uses to reinforce their point?
TL;DR: there are brainrot farms with help from AI.
But I saw this one coming three or four years ago.
Actually, I've been listening to AI-generated brainrot music. I prefer it to some human-generated brainrot music (there's "I Hate Boys" from Christina Aguilera. Sorry if you are a fan).
Brainrot serves a specific social purpose: relieving stress, incoherently winning elections. It's a kind of drug that dulls the dangerous part of the brain while leaving the he-is-a-good-tool and she-is-blonde brain hemispheres in working order.
In fact, I do believe that if there were to be an uprising in a couple of decades against AI, and the human side were to rise victorious, the aftermath's social order would be studiously anti-AI and anti-science, but they would make a carve-out for AI brainrot (yes, I published a short fiction story with that premise, because I'm brainrot-vers).
ummmm, WOW!, hey that clicks
your brainrot/drug description is good.
making a choice for zero human content and therefore zero interaction.
the full suite of options would include perfectly artificial scents.
personally, I am way over in the analog/organic direction, but I get the need
to disconnect from the "whatever this is™" that passes for a society.
the question remains whether AI can scale to meet the demands and desires society has always placed on individuals
the audible exasperated noise coming from the person in line with me (not behind me, I might add, but the person leaving in front of me) on seeing me pull out cash, their own perfect
little automated world broken merely by witnessing such a primitive ritual, is the prime
example of someone who will violently reject AI and the rest when it inevitably fails to "fix" everything
Are you serious when you connect anti-AI sentiment to anti-science sentiment?
To me, they are opposite sentiments, and my experience discussing AI with others supports this. The most pro-AI people I meet are very far removed from science, and my research colleagues are definitely more critical of AI than not.
from TFA: "All of the images in this post were generated by an ai in response to the simple two-word prompt “lovely knitting”"
Edit: PS: Kate Davies is an actual creator who has been making knitting patterns for years.