Credit to the original artist.
Ironic. The translator and artist were the first ones to be killed, and now we got this bastardized AI “translation” that’s actually an entirely different image, but worse.
This is why so many were confused about “personal.” I believe it’s a popular borrowed term in Brazil that simply means personal trainer.
Not personnel, not HR, not personal assistant, nor an AI hallucination, even though some confidently claimed as much, all because the original work was discarded for a shitty alternative, much like workers themselves.
This… Almost looks like the op of this post used AI to translate and change the art style of this comic.
Thank you for finding it. I will leave his IG here and in the description, since my original post was just a copy, unfortunately.
I’m deeply moved
Seems the translated variant misses a big point of the original artist too. Notice how the gun slowly comes into view? It’s making the point that the replacement isn’t organic, but rather forced on us. It probably would have been better to just translate the text in place and include the rightful credit.
Automation and job replacement are a good thing. The reason it feels bad is that we’ve tied the ability to satisfy our basic needs to employment. In an economic model that actually isn’t a dystopian hellscape, robots replacing jobs would be something to celebrate.
And to switch our economic model to one in which a person can thrive without pissing the vast majority of their life away on the grind, we just need to pull ourselves up by our bootstraps!
This is so important.
An aspect of post-scarcity is that people shouldn’t have to work. AGI might allow that; LLMs are starting to fill some niches.
The problem is how it’s being done. Rather than benefiting society as a whole, it’s enriching a few. In an ideal world, people whose jobs are replaced should get a stipend. We should all be eagerly awaiting that time when our jobs are replaced and we get a paycheck - maybe a little reduced - but now we’re free to pursue our interests. If that means doing your old job, only now it’s bespoke, artisan work, great.
The other missing factors are free energy and limitless resources. We’re making progress on energy, but resources are an issue with no solution on the horizon. Plus, we’re killing the planet just by existing, so there’s that.
We have a lot of problems to solve but AI is part of the solution, except that it’s being done wrong. And expensively.
Cooking is something that requires advanced robotics or some kind of heavily modular factory-like automated meal production line, not AI. Though AI certainly could assist in the development of such.
Drivers are being actively replaced right before our eyes.
A lot of lawyer work is already being heavily automated, even without AI. Outside of that, it’s “technically” replaceable with AI, but on a literal legal level it’s not likely currently possible. I think automating some aspects of being a lawyer might be beneficial, but certain elements would be downright dystopian if fully automated.
Doctor work is also already being automated, but this is arguably a very good thing, as it may hold the key to a lot of medical breakthroughs and might unlock the potential to sort through all that personal medical data people have been collecting ever since that became a thing. It might also help significantly reduce the cost of highly effective personal healthcare, given sufficient time.
Teacher work could probably be partially automated, but getting kids to pay attention to a lesson, discipline, safety, etc. would likely require a human to be around, if only for liability.
modular factory-like automated meal production line, not AI.
Define AI… LLMs are just a part of that.
Yeah, Artificial Intelligence is a pretty broad category of technologies. Even so, robotics and automation are not AI. You could pair a robot or an automated factory with an AI of some kind, or use an AI to design them, and they’re related in that they involve computer technology. Still, not the same thing.
A robotic arm in a car factory is a robot, but it doesn’t have AI in it; it’s usually given a set of commands to repeat.
A Rube Goldberg machine is technically automated once initialized. It’s not AI.
I assure you, teaching as the profession we have today is not at all safe.
What’s a personal?
Personal trainers are called simply personal in Brazil, and the original comic is in Brazilian Portuguese. This is an AI translated version.
Came here to say this… Personal?
This strip was made by AI, wasn’t it? WASN’T IT??!?!?!
It 100% is the new 4o image generation, which is very good at producing crisp panel comics with readable text exactly like this.
The scariest thing is all the people responding with denial, oblivious to this not being human-made.
Any personals here?
As an auto mechanic, my job will never be replaced by AI; instead we’re fucked by low wages and the black box the automobile has slowly become.
Personal is a career?
The original meme this was copied from is Brazilian:
Probably a hallucination of the AI that generated this
I still think that all jobs are, in general, safe for the foreseeable future. But we will be expected to use AI tools and just produce more and more, so that a few people will gain more and more resources and power.
E.g. as engineers we will do less and less actual planning, but we will run AIs like they were a team of engineer slaves.
And I think this will be similar for other branches. A music composer will run AIs to compose parts of a song, adjust it, readjust other parts, till the song is good. I mean, afaik this is already how much of it works.
I believe that a few jobs will be hard hit. Things like first level phone customer support or service are probably going to be decimated, keeping humans for 2nd or 3rd level.
A similar thing happened with the arrival of the PC. In a few short years, the majority of professional typist jobs disappeared.
Entry level at most jobs will be hit. If you basically exist to do grunt work that somebody else assigns and will “approve” before going out, AI may replace you. I would not want to be a junior marketing communications person.
AI has sucked for years and that didn’t stop companies from trying to replace customer service with AI.
Everyone thinks their own line of work is safe because everyone knows the nuances of their own job. But the thing that gets you is that the easier a job gets the fewer people are needed and the more replaceable they are. You might not be able to make a robot cashier, but with the scan and go mobile app you only need an employee to wave a scanner (to check that some random items in your cart are included in the barcode on your receipt) and the time per customer to do that is fast enough that you only need one person, and since anyone can wave a scanner you don’t have much leverage to negotiate a raise.
This is the lump of labor fallacy. The error you are making is assuming that there is a fixed quantity of work that needs to be performed. When you multiply the productivity of every practitioner of a trade, they can lower their prices. This enables more people to afford those services. There’s a reason people don’t own just 2 or 3 sets of clothes anymore.
When you multiply the productivity of every practitioner of a trade, they can lower their prices.
I’m sorry, but that’s some hilarious Ayn Rand thinking. Prices didn’t go down in grocery stores that added self-checkout, they just made more profit. Companies these days are perfectly comfortable keeping the price the same (or raising them) and just cutting their overhead.
Don’t get me wrong, if there are items where they could make more profit by selling more, then they likely would lower prices. But I think those items are few and far between. For everything else, they just make more money with fewer workers.
Drivers were on the edge for a long time. Lawyers are on the edge for the past 2-3 years. Cooks are probably the closest ones to be on the edge too.
How were drivers on the edge?
Self-driving cars have been threatening to arrive for years. Trucks are practically here (on private roads, currently). The desire is strong.
Those images look nothing alike unless you look no further than the contrasted regions… Which, fair enough, could indicate someone taking the outline of the original, but you hardly need AI to do that (tracing has existed for a while), and it’s certainly something human artists do as well, both as practice and as artistic reinterpretation (re-using existing elements in different, transformative ways).
It’s hard to argue that the contrast of an image alone would be distinctive enough to constitute someone’s ownership, whether by copyright or by a layman’s judgement. It easily meets the burden of significant enough transformation.
It’s easy to see why: nobody would confuse it with the original. Assuming the original is the right one, it looks way better and more coherent. If this person wanted to just steal from this Arcipello, they’re doing a pretty bad job.
EDIT: And I doubt anyone denies the existence of thieves, whether they use AI or not. But this assertion that one piece can somehow justify sweeping judgements about multi-faceted tech that, by this point, at least hundreds of thousands if not millions of people are using, from hobbyist tinkerers to technical artists, is ridiculous.
AI can absolutely produce copyrighted content if it’s prompted to. Name drop an artist in Midjourney and you will be able to prompt their style - see this list of artists and prompted images. So you can just tweak the settings a bit to heavily weight their name, generally describe the composition of the work you’re looking to approximate, and you can absolutely produce something close to their original works.
The image is wrong because the original artwork is not stolen. It is part of a dataset by LAION (or another similar dataset, basically a text-image pair where the image is linked at its original source). To train the imagegen, its company had to download a temporary copy, which is exempt from infringement by copyright law. There is no original artwork somewhere in a database accessible by Midjourney, just the numerical relationship generated by the image-text pair it learned from.
On the other hand, AI can obviously produce content in violation of copyright - like here. But that’s specifically being prompted by the user. You can see other examples of this with Grok generating Mickey Mouse and Simpsons characters. As of right now, copyright violations are the legal responsibility of the users generating the content - not the AI itself.
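To illustrate the point about the dataset only linking to images: a LAION-style record is essentially just a URL paired with a caption, with no pixel data stored. A minimal sketch (field names and the URL are illustrative, not the actual dataset schema):

```python
# Illustrative sketch of a LAION-style text-image pair: the dataset
# stores a link to the image plus its caption -- not the image itself.
# Field names and the URL are made up for this example.
record = {
    "url": "https://example.com/artwork.jpg",  # hypothetical source link
    "caption": "digital painting of a lighthouse at dusk",
}

# Training code would fetch a temporary copy from the link; the dataset
# itself contains only this text metadata.
assert "pixels" not in record
print(sorted(record.keys()))  # → ['caption', 'url']
```

So what the model retains after training is the learned numerical relationship between captions and images, not a stored copy of the artwork.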
I think you meant to respond to someone else, as I pretty much agree(d) with everything you’re saying and have not claimed otherwise. In fact, in my very post I did say, in more layman terms, that it was very likely this person used img2img or controlnet to copy the layout of the image. I think it’s less likely they got something this similar unguided, although it’s possible depending on the model, or by somehow locking the prompt onto the original work.
But the one point I do disagree with is that this is a violation of copyright, as I explained before. For it to be a violation, it would need to look substantially more similar to the original. The one consistent element between the two is the rough layout of the image (the contrasted areas); the rest of the content is very different. You notice the similarity of the contrasted areas much more easily because the image is sized down so much.
I hope you understand, as you seem more knowledgeable than the people who downvoted without leaving a comment, but you are allowed to use ideas and concepts from others without infringing on their work; without that, the creative industry literally couldn’t function. And yes, it is the responsibility of anyone using these models to avoid infringement.
This person skirts too close in my eyes by pretty much 1:1 copying the layout, but it’s almost certainly still fine, as again, a human doing this with an existing piece of work would also be fine (e.g. the many replicas/traces of the Mona Lisa).
Hell, if you take a look at the image in this very Lemmy post, which was almost certainly taken from someone else, it makes a much better case for copyright infringement, since it has the same layout, nearly identical people in the boxes, and the same general message and concepts.
But in the end, copyright is different per jurisdiction and sometimes even between judges. Perhaps there is a case somewhere. It’s just (in my opinion) very unlikely to succeed based on the limited elements that are substantially similar.
EDIT: Added the section about the Mona Lisa replicas for further clarification.
Hm yeah on second look the images aren’t as comparable as I expected. I just saw the general composition in the thumbnails and assumed more similarity. I do think they probably prompted the original artist in the generated work, though, which kind of led to my thoughts in my op.
Yeah, that’s also a fair enough conclusion. I think it’s a bit too convenient that the rest of the image looks a lot worse (much clearer signs of botched AI generation) while the layout remains pretty much exactly the same, which to me looks like selective generation.
Technically speaking, it’s the opposite of what’s in the picture. The professions replaced by robots in the picture are in fact not replaceable, because they require emotional awareness. On the other hand, the professions in the picture that represent humans can be replaced by robots, because they only require data.
Teachers and physicians do not require emotional awareness?
This is a mistake that many people will make, and it will be decades before they realize what they’ve done.
I teach elementary school. While most of the things I’m accountable for on paper are academic, most of my actual time is spent helping my students understand how to be functional humans. Problem-solving skills. Interpersonal skills. Self-control. Empathy. Self-esteem. In early grades, motor skills like how to hold a pencil or use scissors.
When we put a whole generation of kids in computerized AI schools (because it’s not really an “if” any more), we will see a huge effect in the real world, but probably not until after they graduate and have to start dealing with people in different work environments. And by then, we’ll be totally screwed.
Of course, the 1% will still have their kids in real schools with real teachers, because they already know that the very products they tout to the masses are actually detrimental to child development.
It’s because their parents need to go to work.
AI bad
Yes, yes it is.
Not sure if I’d agree here. I think that used properly, AI definitely has great use-cases, especially in areas of science, like medicine.
As with any new “invention”, there are the tech-bros that jump at it first chance they get and try to push it into anything. We had that with blockchain, we had that with crypto, we had it with web3, and now we have it with AI.
The tech isn’t bad at all, it’s actually extremely useful, but the use-cases it’s put to work at aren’t.
Luddite
The luddites were unironically entirely correct, and capitalist disenfranchisement of labor has made the world objectively worse despite the wealth it brought to 0.001% of the population.
It’s cute that you have your own call-out forum for people who disagree with your generic neoliberal beliefs and all, one that only you post to or really participate in bar a few lost /all viewers, but that’s not an argument.
People being upset that their livelihoods were being destroyed while their previous bosses became immeasurably richer while doing even less work were objectively on the right side of history, given where it has led us: the greatest wealth disparity in all of known human history, and the most people food- and shelter-insecure in all of human history.
Eh. It’s more like popular history remembers the bullet points of their ideals and not the reality.
What’s stupid is thinking LLMs are AI.
Can you please stop misusing words?
Can you please read a dictionary?
I have checked one before making the comment
Could you please give me your definition of the word?
Check again. I heard that reading out loud, word by word also helps some people.
There are multiple dictionaries with different definitions. Could you please give me your definition?
Amusingly, cook is probably the safest of those positions for the time being. The physicality and necessity of presence make it harder to automate. Lawyer, doctor, and teacher can be done remotely and are based largely on knowledge, so they are prime targets. People are already trying it. Drivers you could see being replaced remotely if we had faster, more ubiquitous net connections, so it’s doable as well. It’s basically already happening. But cooking… AI doesn’t seem like it would give you the right kind of inputs and outputs to do that any easier/faster/cheaper. It’s already possible to make a food vending machine. The limitations of vending machines aren’t really that they need an easier interface on their database, so AI won’t really help there. And to go beyond that and try to make an AI-powered restaurant probably wouldn’t be profitable. It’s barely profitable to run a regular restaurant most of the time. If you put in the probable millions to automate a restaurant, it’d probably go the same way as the self-checkout lanes at stores, which is to say poorly.
Actually, of all the jobs, I would think the safest are doctors and lawyers. When your life and liberty are on the line, you really don’t want an emotionless machine; you want a human.
Years ago I had to have surgery on my neck to remove a benign tumor, and I absolutely wasn’t worried. Well, I was definitely worried it would hurt, but I wasn’t worried it would go wrong and I’d end up with a major artery cut, because I trusted the person doing it, because they came and talked to me. I would absolutely not trust a robot to do surgery, even if logically the robot would probably be better than the human.
It depends on the type of doctor and lawyer’s service. Some will remain with humans. Some will be a welcome free-up of their time to focus on the more unusual (not solvable by regressing to the mean) cases. There are many doctor’s appointments that boil down to ‘You have the flu. Here’s a beg off note for your shitty boss. Go back to bed.’ And there are many attorneys’ consultations that boil down to ‘I have taken down what you want to say, and now I will translate it into legalese.’
As for the trust, that comes from expectations. You trust a human because human surgeons are the norm. You don’t have buddies who had a robot remove their appendix. If the AI is competent, eventually that would be as normal to a patient as buying something from a vending machine.
However, I suspect surgery in particular is another of those things where it’ll take an absolute mountain of training data and a lot of risk of human health/life to even attempt, so it’s a long way off compared to the simple ‘GP writes a referral’ stuff.