You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers. AI code is absolutely up to production quality! Also, you’re all…
The general comments that Ben received were that experienced developers can use AI for coding with positive results because they know what they’re doing. But AI coding gives awful results when it’s used by an inexperienced developer. Which is what we knew already.
As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).
That should be a big warning sign that the next generation of developers is not going to be very good. If they’re waist-deep in AI slop, they’re only going to learn how to deal with AI slop.
What I’m feeling after reading that must be what artists feel like when AI slop proponents tell them “we’re making art accessible”.
Sounds like job security to me!
“I want the people I teach to be worse than me” is a fucking nightmare of a want, I hope you learn to do better
So there’s this new thing they invented. It’s called a joke. You should try them out sometime, they’re fun!
“oh shit I got called out on my shitty haha-only-serious comment, better pretend I didn’t mean it!” cool story bro
If people say that sort of thing around you not as a joke, you need to spend your time with better people. I dunno what to tell you - humor is a great way to deal with shitty things in life. Dunno why you would want to get rid of it.
jesus fuck how do you fail to understand any post of this kind this badly
maybe train your model better! I know I know, they were already supposed to be taking over the world… alas…
“How dare you not find me funny. I’m going to lecture you on humor. The lectures will continue until morale improves.”
So, there’s this new phenomenon they’ve observed in which text does not convey tone. It can be a real problem, especially when a statement made by one person as a joke would be made by another in all seriousness — but don’t worry, solutions have very recently been proposed.
I dunno what kind of world you are living in where someone would make my comment not as a joke. Please find better friends.
you’re as funny as the grave
Art is already accessible. Plenty of artists sell their art dirt cheap, or you can buy a pen and paper at the dollar store.
What people want when they say “AI is making art accessible” is they want high quality professional art for dirt cheap.
…and what their opposition means when they oppose it is “this line of work was supposed to be totally immune to automation, and I’m mad that it turns out not to be.”
There is already a lot of automation out there, and more is better when used correctly. And that’s not even getting into the outright theft of material from the very artists it is trying so badly to replace.
…and this opposition means that our disagreements can only be perceived through the lens of personal faults.
See, I would frame it as practitioners of some of the last few non-bullshit jobs (minimally bullshit jobs) - fields that by necessity require a kind of craft or art that is meaningful or rewarding - being routed around by economic forces that only wanted their work for bullshit results. Like, no matter how passionate you are about graphic design, you probably didn’t get into the field because shuffling the visuals every so often is X% better for customer engagement and conversion or whatever. But the businesses buying graphic design work are more interested in that than they ever were in making something beautiful or functional, and GenAI gives them the ability to get what they want more cheaply. As an unexpected benefit they also don’t have to see you roll your eyes when they tell you it needs to be “more blue”, and as an insignificant side effect it brings our culture one step closer to finally drowning the human soul in shit to advance the cause of glorious industry in its unceasing march to An Even Bigger Number.
I dunno. I feel like the programmers who came before me could say the same thing about IDEs, Stack Overflow, and high level programming languages. Assembly looks like gobbledygook to me and they tell me I’m a Senior Dev.
If someone uses ChatGPT like I use StackOverflow, I’m not worried. We’ve been stealing code from each other since the beginning. “Getting the answer” and then having to figure out how to plug it into the rest of the code is pretty much what we do.
There isn’t really a direct path from an LLM to a good programmer. You can get good snippets, but “ChatGPT, build me an app” will be largely useless. The programmers who come after me will have to understand how their code works just as much as I do.
LLM as another tool is great. LLM to replace experienced coders is a nightmare waiting to happen.
IDEs, Stack Overflow - they are tools that make the life of a developer a lot easier; they don’t replace him.
I mean, past a certain point LLMs are strictly worse tools than Stack Overflow was on its worst day. IDEs have a bunch of features to help manage complexity and offload memorization; the fundamental task of understanding the code you’re writing is still yours. Stack Overflow and other forums are basically crowdsourced mentorship programs: someone out there knows the thing you need to know, and rather than cultivating a wide social network you can take advantage of mass communication. To use it well you still need to know what’s happening, and if you don’t, you can at least trust that the information is out there somewhere to follow up on as needed. LLM assistants are designed to create output that looks plausible and to tell the user what they want to hear. If the user is an idiot, the LLM will do nothing to make them recognize that they’re doing something wrong, much less help them fix it.
LLMs are terrible because the data they were trained on is garbage - companies don’t want to pay for people to create a curated dataset that would produce acceptable results.
The tech itself can be good in specific cases. But the way it is being shoved into everything right now is terrible.
weren’t you also here having shitty opinions like a week ago?
e: yes
Looking at your history, keep on being edgy and contributing to the stereotype.
What stereotype? The stereotype that awful.systems posters are hostile to people who praise LLMs? Good.
god forbid any such posters think we want them pissing in our lounge
it can’t be that stupid, you must be training it wrong