Lots of people on Lemmy really dislike AI’s current implementations and use cases.
I’m trying to understand what people would want to be happening right now.
Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?
Thanks for the discourse. Please keep it civil, but happy to be your punching bag.
If we’re going pie in the sky I would want to see any models built on work they didn’t obtain permission for to be shut down.
Failing that, any models built on stolen work should be released to the public for free.
This is the best solution. Also, any use of AI should have to be stated and watermarked. If they used someone’s art, that artist has to be listed as a contributor and you have to get permission. Just like they do for every film, they have to give credit. This includes music, voice and visual art. I don’t care if they learned it from 10,000 people, list them.
If we’re going pie in the sky I would want to see any models built on work they didn’t obtain permission for to be shut down.
I’m going to ask the tough question: Why?
Search engines work because they can download and store everyone’s copyrighted works without permission. If you take away that ability, we’d all lose the ability to search the Internet.
Copyright law lets you download whatever TF you want. It isn’t until you distribute said copyrighted material that you violate copyright law.
Before generative AI, Google screwed around internally with all those copyrighted works in dozens of different ways. They never asked permission from any of those copyright holders.
Why is that OK but doing the same with generative AI is not? I mean, really think about it! I’m not being ridiculous here, this is a serious distinction.
If OpenAI did all the same downloading of copyrighted content as Google and screwed around with it internally to train AI then never released a service to the public would that be different?
If I’m an artist who makes paintings and someone pays me to copy someone else’s copyrighted work, it’s on me to make sure I don’t do that. It’s not really the problem of the person who hired me unless they distribute the work.
However, if I use a copier to copy a book and then start selling or giving away those copies, that’s my problem: I would’ve violated copyright law. But is it Xerox’s problem? Did they do anything wrong by making a device that can copy books?
If you believe that it’s not Xerox’s problem then you’re on the side of the AI companies. Because those companies that make LLMs available to the public aren’t actually distributing copyrighted works. They are, however, providing a tool that can do that (sort of). Just like a copier.
If you paid someone to study a million books and write a novel in the style of some other author you have not violated any law. The same is true if you hire an artist to copy another artist’s style. So why is it illegal if an AI does it? Why is it wrong?
My argument is that there’s absolutely nothing illegal about it. They’re clearly not distributing copyrighted works. Not intentionally, anyway. That’s on the user. If someone constructs a prompt with the intention of copying something as closely as possible… To me, that is no different than walking up to a copier with a book. You’re using a general-purpose tool specifically to do something that’s potentially illegal.
So the real question is this: Do we treat generative AI like a copier or do we treat it like an artist?
If you’re just angry that AI is taking people’s jobs say that! Don’t beat around the bush with nonsense arguments about using works without permission… Because that’s how search engines (and many other things) work. When it comes to using copyrighted works, not everything requires consent.
Search engines work because they can download and store everyone’s copyrighted works without permission. If you take away that ability, we’d all lose the ability to search the Internet.
No, they don’t. They index the content of the page and score its relevance and reliability, and they still provide the end user with the actual original information.
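Very roughly, what a search index stores is terms mapped to ranked links back to the source, not a product built out of the works themselves. Here’s a toy sketch (the pages and the scoring are made up, and real engines use far more sophisticated ranking like TF-IDF and PageRank):

```python
from collections import defaultdict

# Toy corpus: a real engine crawls and caches pages like these,
# but what it serves users is a ranked list of links, not the pages.
pages = {
    "https://example.com/a": "cats are great pets and cats purr",
    "https://example.com/b": "dogs are loyal pets",
}

# Build an inverted index: term -> {url: term frequency}
index = defaultdict(dict)
for url, text in pages.items():
    for word in text.split():
        index[word][url] = index[word].get(url, 0) + 1

def search(query):
    """Score each page by how many query terms it contains
    (a naive stand-in for real relevance/reliability scoring)."""
    scores = defaultdict(int)
    for word in query.split():
        for url, count in index.get(word, {}).items():
            scores[url] += count
    # Return ranked links pointing back to the original source.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(search("cats pets"))  # [('https://example.com/a', 3), ('https://example.com/b', 1)]
```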
However, if I use a copier to copy a book and then start selling or giving away those copies, that’s my problem: I would’ve violated copyright law. But is it Xerox’s problem? Did they do anything wrong by making a device that can copy books?
This is a false equivalence.
LLMs do not wholesale reproduce an original work in its original form; they make it easy to mass-produce a slightly altered form without any way to identify the original attribution.
If you paid someone to study a million books and write a novel in the style of some other author you have not violated any law. The same is true if you hire an artist to copy another artist’s style. So why is it illegal if an AI does it? Why is it wrong?
I think this is intentionally missing the point.
LLMs don’t actually think or produce original ideas. If a human artist produces a work that too closely resembles a copyrighted work, they will be subject to those laws. LLMs are not capable of producing new works; by definition they are 100% derivative. But their methods intentionally obfuscate attribution and allow anyone to flood a space with works that require actual humans to identify the copyright violations.
They have to pay for every piece of copyrighted material used in the entire model whenever the AI is queried.
They are only allowed to use data that people opt into providing.
There’s no way that’s even feasible. Instead, AI models trained on publicly available data should be considered part of the public domain. So any images that anyone can go and look at without a barrier in the way would be fair game, but the model would be owned by the public.
What about models folks run at home?
Careful, that might require a nuanced discussion that reveals the inherent evil of capitalism and neoliberalism. Better off just ensuring that wealthy corporations can monopolize the technology and abuse artists by paying them next-to-nothing for their stolen work rather than nothing at all.
Magic wish granted? Everyone gains enough patience to leave it to research until it can be used safely and sensibly. It was fine when it was an abstract concept being researched by CS academics. It only became a problem when it all went public and got tangled in VC money.
Like a lot of others, my biggest gripe is the accepted copyright violation for the wealthy. They should have to license data (text, images, video, audio) for their models, or use material in the public domain. With that in mind, in return I’d love to see pushes to drastically reduce the duration of copyright. My goal is less about destroying generative AI, as annoying as it is, and more about leveraging the money behind it to change copyright law.
I don’t love the environmental effects but I think the carbon output of OpenAI is probably less than TikTok, and no one cares about that because they enjoy TikTok more. The energy issue is honestly a bigger problem than AI. And while I understand and appreciate people worried about throwing more weight on the scales, I’m not sure it’s enough to really matter. I think we need bigger “what if” scenarios to handle that.
There are too many solid reasons to be upset with, well, not AI per se, but the companies that implement, market, and control the AI ecosystem and conversation, to go into in a single post. Suffice it to say I think AI is an existential threat to humanity, mainly because of who’s controlling it and who’s not.
We have no regulation on AI, we have no respect for artists, writers, musicians, actors, and workers in general coming from these AI-peddling companies, and we only see more and more surveillance and control over multiple aspects of our lives being consolidated around these AI companies. Even worse, we get nothing in exchange except the promise of increased productivity and quality, and that promised increase in productivity and quality is a lie. AI currently gives you the wrong answer, some half-truth, or some abomination of someone else’s artwork really, really fast… that is all it does, at least for the public sector currently.
For the private sector, at best it alienates people as chatbots, and at worst it is being utilized to infer data for surveillance of people. The tools of technology at large are being used to suppress and obfuscate speech by whoever wields them, and AI is one tool amongst many at the disposal of these tech giants.
AI is exacerbating a knowledge crisis that was already in full swing, as both educators and students become less curious about subjects that don’t inherently relate to making profits or consolidating power. And because knowledge is seen solely as a way to gather more resources/power and survive in an increasingly hostile socioeconomic climate, people will always reach for the lowest-hanging fruit to get to that goal, rather than actually knowing how to solve a problem that hasn’t been solved before, inherently understanding a problem that has been solved before, or just knowing something relatively useless because it’s interesting to them.
There are too many good reasons AI is fucking shit up, and in all honesty what people in general tout about AI is definitely just a hype cycle that will not end well for the majority of us. At the very least, we should be upset and angry about it.
Here are further resources if you didn’t get enough ranting.
I’d like for it to be forgotten, because it’s not AI.
Thank you.
It has to come from the C suite to be “AI”. Otherwise it’s just sparkling ML.
Stop selling it at a loss.
When each ugly picture costs $1.75, and every needless summary or expansion costs 59 cents, nobody’s going to want it.
Training data needs to be 100% traceable and licensed appropriately.
Energy usage involved in training and running the model needs to be 100% traceable and some minimum % of renewable (if not 100%).
Any model whose training includes data in the public domain should itself become public domain.
And while we’re at it, we should look into deliberately taking more time at lower clock speeds to try to reduce or eliminate the water used to cool these facilities.
I want people to figure out how to think for themselves and create for themselves without leaning on a glorified Markov chain. That’s what I want.
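For anyone who hasn’t seen one: a word-level Markov chain just samples the next word from whatever followed the current word in its source text. Here’s a toy sketch with a made-up training sentence (actual LLMs are vastly more complex, so the comparison is rhetorical):

```python
import random
from collections import defaultdict

# Tiny "training corpus" (made up for illustration).
text = "the cat sat on the mat and the cat slept on the couch"
words = text.split()

# Markov chain: map each word to the words that follow it.
chain = defaultdict(list)
for current, following in zip(words, words[1:]):
    chain[current].append(following)

def generate(start, length=8):
    """Generate text by repeatedly sampling a word seen after the current one."""
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the couch"
```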
AI people always want to ignore the environmental damage as well…
Like all that electricity and water are just super abundant things humans have plenty of.
Every time some idiot asks AI instead of googling it themselves, the planet gets a little more fucked.
Are you not aware that Google also runs on giant data centers that eat enormous amounts of power too?
Multiple things can be bad at the same time, they don’t all need to be listed every time any one bad thing is mentioned.
I wasn’t listing other bad things. This is not whataboutism; this was a specific criticism of telling people not to use one thing because it uses a ton of power/water when the thing they’re telling people to use instead also uses a ton of power/water.
Yeah, you’re right. I think I misread your/their comment initially or something. Sorry about that.
And AI is in search engines now too, so even if asking chatfuckinggpt uses more water than Google searching something used to, Google now has its own additional fresh-water resource depletor to insert unwanted AI into whatever you look up.
We’re fucked.
Fair enough.
Yeah, the integration of AI with chat will just make it eat even more power, of course.
This is like saying a giant truck is the same as a Civic for a 2-hour commute…
Per: https://www.rwdigital.ca/blog/how-much-energy-do-google-search-and-chatgpt-use/
Google search currently uses 1.05 GWh/day. ChatGPT currently uses 621.4 MWh/day.
The per-query cost for Google is about 10% of what it is for GPT, but Google gets used quite a lot more. So for one user, ‘just use Google’ is fine, but since we are making proscriptions for all of society here, we should consider that there are ~300 million cars in the US; even if they were all Honda Civics they would still burn a shitload of gas and create a shitload of fossil fuel emissions. All I’m saying is that if the goal is to reduce emissions we should look at the big picture, which will let you understand that taking the bus will do you a lot better than trading in your F-150 for a Civic.
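Back-of-envelope version of that per-query comparison, using the daily totals quoted above. The query volumes below are my own rough assumptions for illustration, not figures from the linked article:

```python
# Daily energy totals quoted above.
google_wh_per_day = 1.05e9     # 1.05 GWh
chatgpt_wh_per_day = 621.4e6   # 621.4 MWh

# Assumed daily query volumes (illustrative guesses, not from the source).
google_searches_per_day = 8.5e9
chatgpt_prompts_per_day = 0.5e9

google_wh_per_query = google_wh_per_day / google_searches_per_day    # ~0.12 Wh
chatgpt_wh_per_query = chatgpt_wh_per_day / chatgpt_prompts_per_day  # ~1.24 Wh

print(f"Google: {google_wh_per_query:.2f} Wh/query")
print(f"ChatGPT: {chatgpt_wh_per_query:.2f} Wh/query")
print(f"Ratio: {google_wh_per_query / chatgpt_wh_per_query:.0%}")    # roughly 10%
```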
Google search currently uses 1.05 GWh/day. ChatGPT currently uses 621.4 MWh/day.
…
And oranges are orange
It doesn’t matter what the totals are when people are talking about one or the other for a single use.
Fewer people commute to work on private jets than on buses; are you gonna say jets are fine and buses are the issue?
Because that’s where your logic ends up.
People haven’t ”thought for themselves” since the printing press was invented. You gotta be more specific than that.
Ah, yes, the 14th century. That renowned period of independent critical thought and mainstream creativity. All downhill from there, I tell you.
Independent thought? All relevant thought is highly dependent on other people and their thoughts.
That’s exactly why I bring this up. Having systems that teach people to think in a similar way enable us to build complex stuff and have a modern society.
That’s why it’s really weird to hear this ”people should think for themselves” criticism of AI. It’s a similar justification to antivaxxers saying you ”should do your own research”.
Surely there are better reasons to oppose AI?
The usage of “independent thought” has never meant “independent of all outside influence”; it has simply meant going through the process of reasoning (thinking through a chain of logic) instead of accepting and regurgitating the conclusions of others without any of one’s own reasoning. It has a similar lay meaning to being an independent adult. We all rely on others in some way, but an independent adult can usually accomplish activities of daily living through their own actions.
I agree on the sentiment, it was just a weird turn of phrase.
Social media has done a lot to temper my techno-optimism about free distribution of information, but I’m still not ready to flag the printing press as the decay of free-thinking.
Things are weirder than they seem on the surface.
A math professor colleague of mine calls extremely restrictive use of language ”rigor”, for example.
The point isn’t that it’s restrictive, the point is that words have precise technical meanings that are the same across authors, speakers, and time. It’s rigorous because of that precision and consistency, not just because it’s restrictive. It’s necessary to be rigorous with use of language in scientific fields where clear communication is difficult but important to get right due to the complexity of the ideas at play.
So your argument against AI is that it’s making us dumb? Just like people have claimed about every technology since the invention of writing? The essence of the human experience is change: we invent new tools, and those tools change how we interact with the world. That’s how it’s always been, but there have always been people saying the internet is making us dumb, or the TV, or books, or whatever.
Get back to me after you have a few dozen conversations with people who openly say “Well I asked ChatGPT and it said…” without providing any actual input of their own.
Oh, you mean like people have been saying about books for 500+ years?
Part of what makes me so annoyed is that there’s no realistic scenario I can think of that would feel like a good outcome.
Emphasis on realistic, before anyone describes some insane turn of events.
My biggest issue with AI is that I think it’s going to allow a massive wealth transfer from laborers to capital owners.
I think AI will allow many jobs to become easier and more productive, and even eliminate some jobs. I don’t think this is a bad thing - that’s what technology is. It should be a good thing, in fact, because it will increase the overall productivity of society. The problem is generally when you have a situation where new technology increases worker productivity, most of the benefits of that go to capital owners rather than said workers, even when their work contributed to the technological improvements either directly or indirectly.
What’s worse, in the case of AI specifically, its functionality relies on being trained on enormous amounts of content that was not produced by the owners of the AI. AI companies are in a sense harvesting society’s collective knowledge for free to sell it back to us.
IMO AI development should continue, but be owned collectively and developed in a way that genuinely benefits society. Not sure exactly what that would look like. Maybe a sort of light universal basic income where all citizens own stock in publicly run companies that provide AI and receive dividends. Or profits are used for social services. Or maybe it provides AI services for free but is publicly run and fulfills prosocial goals. But I definitely don’t think it’s something that should be primarily driven by private, for-profit companies.
It’s always kinda shocking to me when the detractor talking points match the AI corpo hype blow by blow.
I need to see a lot more evidence of jobs becoming easier, more productive or entirely redundant.
I want real, legally-binding regulation, that’s completely agnostic about the size of the company. OpenAI, for example, needs to be regulated with the same intensity as a much smaller company. And OpenAI should have no say in how they are regulated.
I want transparent and regular reporting on energy consumption by any AI company, including where they get their energy and how much they pay for it.
Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.
Every step of any deductive process needs to be citable and traceable.
Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.
Their creators can’t even keep them from deliberately lying.
Exactly.
Clear reporting should include not just the incremental environmental cost of each query, but also a statement of the invested cost in the underlying training.
… I want clear evidence that the LLM … will never hallucinate or make something up.
Nothing else you listed matters: That one reduces to “Ban all Generative AI”. Actually worse than that, it’s “Ban all machine learning models”.
Ideally the whole house of cards crumbles and AI goes the way of 3D TVs, for now. The world as it is now is not ready for AGI. We would quickly end up in an "I have no mouth and I must scream" scenario.
Otherwise, what everyone else has posted are good starting points. I would just add that any data centers used for AI have to be powered 100% by renewable energy.
If we’re talking realm of pure fantasy: destroy it.
I want you to understand this is not AI sentiment as a whole, I understand why the idea is appealing, how it could be useful, and in some ways may seem inevitable.
But a lot of sci-fi doesn’t really address the run up to AI, in fact a lot of it just kind of assumes there’ll be an awakening one day. What we have right now is an unholy, squawking abomination that has been marketed to nefarious ends and never should have been trusted as far as it has. Think real hard about how corporations are pushing the development and not academia.
Put it out of its misery.
How do you “destroy it”? I mean, you can download an open source model to your computer right now in like five minutes. It’s not Skynet, you can’t just physically blow it up.
OP asked what people wanted to happen, and even offered "destroy gen AI" as an option. I get that it’s not realistically feasible, but it’s certainly within the realm of options provided for the discussion. No need to police their pie-in-the-sky dream. I’m sure they realize it’s not realistic.
Honestly, at this point I’d settle for just “AI cannot be bundled with anything else.”
Neither my cell phone nor TV nor thermostat should ever have a built-in LLM “feature” that sends data to an unknown black box on somebody else’s server.
(I’m all down for killing with fire and debt any model built on stolen inputs, too. OpenAI should be put in a hole so deep that they’re neighbors with Napster.)