AI is going to destroy art the same way Photoshop, or photography, or pre-made tubes of paints, destroyed art. It’s a tool, it helps people take the idea in their head and put it in the world. And it lowers the barrier to entry, now you don’t need years of practice in drawing technique to bring your ideas to life, you just need ideas.
If AI gets to the point where it can give us creative, original art that sparks emotion in novel ways… well, we probably also made a superintelligent AI, and our list of problems is much different than today’s.
As someone who’s absolutely terrible at drawing, but enjoys photography and creativity in general, having AI tools to generate my own art is opening up a whole different avenue for me to scratch my creative itch.
I’ve got a technical background, so figuring out the tools and modifying them for my purposes has been a lot more fun than practicing drawing.

This is the perfect use case.
Photoshop didn’t destroy jobs forever; all it did was shift how people worked AND it actually created work, and different types of work.
I’ve only dabbled a bit with ML art, and I am by no means an artist, but it doesn’t scratch that itch for me the same way that drawing or doing stuff in blender does. It doesn’t really feel like I’m watching my vision slowly take shape, no matter how precise I make the prompt. It kinda just feels like what it is, a transformer iterating over some random noise.
I’m also a very technical person, and for years I was stuck in that same mindset of “I’m a technical guy, I’m not cut out for art”. I was only able to get out of this slump thanks to some of my art friends, who were really helpful in pointing me in the right direction.
Learning to draw isn’t the easiest thing in the world, and trust me I’m probably as bad at it as you are, but it’s fun, and it feels satisfying.
I agree that AI has a place as another artistic medium, but I also feel like it can become a trap for people like me who think they don’t have an artistic bone in their body.
If you do feel like getting back into drawing, then as a fellow technical person I’d recommend learning blender first. It taught me some of the skills I also use in drawing, like perspective, shading, and splitting complex objects into simpler shapes. It’s also just plain fun.
I think the way I use AI is fundamentally different from how most people draw. For me it’s much more like I’m exploring what’s possible, while making creative decisions on the direction to explore. I don’t start with anything in particular in mind. In a lot of ways it helps with the choice paralysis I get when faced with completely open-ended things like art.
As someone who’s absolutely terrible at drawing
Then practice. Nearly no artist was born knowing how to draw or paint, we dedicated countless hours to learn what works and what doesn’t.
As a musician, I couldn’t agree more. Talent really helps with initial aptitude, but will peter out when challenged. That’s when real skill development begins. Time and investment connecting you to your craft until there’s nothing in the world between the two, that’s self actualization.
But that’s not fun for them. You get really good at things you like to do.
It feels like you didn’t read the 2nd half of their comment. They do practice. They have a creative side that they want to explore, but they don’t enjoy that sort of grind. Instead, they like tinkering and combining tools in interesting ways. I don’t think this is a bad thing.
Leo Fender didn’t play guitar; he always wished he’d sit down and devote the time, but he never actually enjoyed it. But to say that Leo didn’t contribute to the music world would be insane.
i like the idea of AI as a tool artists can use, but that’s not a capitalist’s viewpoint, unfortunately. they will try to replace people.
And if text-based images remain uninspired and samey… oh well? Congratulations, you will forever after be able to spot when someone’s extremely timely gag image was cranked out via its description, rather than badly composited from Google Images results. I’ve done a lot of bad compositing for Something Awful shitpost threads and speed beats effort every time.
This. AI was never made for the sole purpose of creating art or beating humans in chess. Those are just side quests for the real stuff.
What do you think the “real stuff” is?
Some people also don’t care if there is a Rembrandt or a Picasso or an AI but like to dabble in the arts anyway because it’s something they like to do.
It’s fulfilling (I do love Renoir though).
Tbh I hate Photoshop for a lot of photography. It is unfortunately necessary for macro photography, which is the only type I do. Which is one of the reasons mine is not nearly as good as it could be: I refuse to use it.
I hate this sentiment. It’s not a tool like a brush is to a canvas. It’s a machine that runs off the fuel of our creative achievements. The sheer amount of pro AI shit I read from this place just makes me that closer to putting a bullet in my fucking skull
Once you reincarnate in the future, generative models will make even better art than they do today. It’ll be a losing battle against time.
Shill
Luddite
Removed by mod
Downvoted for truth. Too bad for them this isn’t reddit
Tech bros are not really techies themselves; they are really just Wall Street bros with tech as their product. Most claim they can code, but if they were coders they would be coding. They are not coders, they are businessmen through and through who just happen to sell tech.
This is 100% correct. It can overlap but honestly as someone going into embedded systems I despise tech bros.
Most claim they can code, but if they were coders they would be coding
I dislike techbros as much as you, but this isn’t really a valid statement.
I can code, but I can’t sell a crypto scam to millions of rubes.
If I could, why would I waste my time writing code?
Many techbros are likely “good enough” coders who have better marketing skills and used their tech knowledge to leverage into business instead.
That is the thing though. The really talented tech people tend to be more in the weeds of the tech and get great enjoyment from that. The “tech bros” are more into groups, people, social structures, manipulation, controlling and such, and would go cross-eyed if they really had to code something complex, as they could never sit that long and concentrate. These are not the same people. Tech bros want you to think they are tech gurus because that is their brand, but it is a lie.
99% of people in tech leadership are just regurgitating marketing jargon with minimal understanding of the underlying tech.
I think approximation is the right word here. It’s pretty cool and all, and I’m looking forward to seeing how it will develop. But it’s mostly a fun toy.
I’m stoked for the moment the tech bros understand that an AI is way better at doing their job than it is at creating art.
Tech bros’ job is to write bad JavaScript and fall for scams; AI has already beaten that.
I think one thing you and many other people misunderstand is that the image generation aspect of AI is a sideshow, both in use and in intent.
The ability to generate images from text based prompts is basically a side effect of the ability that they are actually spending billions on, which is object detection.
It’s bad at anything useful for programming too.
And the things it’s good at have been developed by stealing GPL/copyleft code.
So you’re happy to see AI take someone else’s job as long as it isn’t taking your job.
Taking the jobs of the people responsible for creating it seems preferable to taking others’ jobs.
Less work being done by anyone is better. Thinking it’s bad that work is done for us by robots is the brain worms talking.
Indeed. Ideally AI would do every job, so that humans can focus on just doing what we want to do. It’d be like the whole species getting to retire.
You’d rather cheer for people to lose their jobs without anyone calling you out on it, sure.
I’m not the angry one wishing unemployment on my “enemies” here.
Who are you?
What do you want?
The ideal endpoint is to eliminate the concept of “jobs” entirely. Why should people have to work?
Okay. So why are you breaking that guy’s balls, over automating away jobs, which you don’t want to exist?
Because currently we do need jobs. Otherwise why is he upset about AI in the first place?
deleted by creator
That comment was very Reddit of you. Don’t do that, please.
You’d rather cheer for people to lose their jobs without anyone calling you out on it, sure.
Keep assuming. Fuel your own rage. I tried. Now I’m out. Good night and goodbye.
You said tech bros will realize it’s easier to replace their jobs than those of creatives. Who is included in “tech bros” here? I wanted a job in tech and can’t get one partly because of AI. Am I a tech bro? I would be very careful what you imply here.
All three of you are insufferable
I am insufferable for wanting a job? I am not the one inventing these AIs. Nor am I the one firing people because they exist.
When people talk about “tech bros” without clarifying who they mean I can only imagine they are including people like me.
I’m not the angry one wishing unemployment on my “enemies” here.
I think they’re using AI to say the same sentence over and over again.
He’s saying the same thing because he’s not actually getting a proper response. The other guy just keeps saying shit like “That’s very reddit of you” or some shit after possibly threatening his job.
I work in AI. LLMs are cool and all, but I think it’s all mostly hype at this stage. While some jobs will be lost (voice work, content creation), my true belief is that we’ll see two things increase:
- The release of productivity tools that use LLMs to help automate or guide menial tasks.
- The failure of businesses that try to replicate skilled labour using AI.
In order to stop point two, I would love to see people and lawmakers really crack down on AI replacing jobs, and regulating the process of replacing job roles with AI until they can sufficiently replace a person. If, for example, someone cracks self-driving vehicles then it should be the responsibility of owning companies and the government to provide training and compensation to allow everyone being “replaced” to find new work. This isn’t just to stop people from suffering, but to stop the idiot companies that’ll sack their entire HR department, automate it via AI, and then get sued into oblivion because it discriminated against someone.
I’ve also heard it’s true that, as far as we can figure, we’ve basically reached the limit on certain aspects of LLMs already. Basically, LLMs need a FUCK ton of data to be good. And we’ve already pumped them full of the entire internet, so all we can do now is marginally improve these algorithms whose inner workings we barely understand. Think about that: the entire Internet isn’t enough to successfully train LLMs.
LLMs have taken some jobs already (like audio transcription, basic copyediting, and aspects of programming), we’re just waiting for the industries to catch up. But we’ll need to wait for a paradigm shift before they start producing pictures and books or doing complex technical jobs with few enough hallucinations that we can successfully replace people.
The (really, really, really) big problem with the internet is that so much of it is garbage data. The number of false and misleading claims spread endlessly on the internet is huge. To rule those beliefs out of the data set, you need something that can grasp the nuances of published, peer-reviewed data, of deliberately misleading propaganda, of fringe conspiracy nuts who believe the Earth is controlled by lizards with planes and that only a spritz bottle full of vinegar can defeat them, and of everything in between.
There is no person, book, journal, website, newspaper, university, or government that has reliably produced good, consistent help on questions of science, religion, popular lies, unpopular truths, programming, human behavior, economic models, and many, many other things that continuously have an influence on our understanding of the world.
We can’t build an LLM that won’t consistently be wrong until we can stop being consistently wrong.
Yeah, I’ve heard medical LLMs are promising when they’ve been trained exclusively on medical texts. Same with the AI that’s been trained exclusively on DNA, etc.
My own personal belief is very close to what you’ve said. It’s a technology that isn’t new, but had been assumed to not be as good as compositional models because it would cost a fuck-ton to build and would result in dangerous hallucinations. It turns out that both are still true, but people don’t particularly care. I also believe that one of the reasons why ChatGPT has performed so well compared to other LLM initiatives is because there is a huge amount of stolen data that would get OpenAI in a LOT of trouble.
IMO, the real breakthroughs will be in academia. Now that LLMs are popular again, we’ll see more research into how they can be better utilised.
Afaik OpenAI got their training data from what was basically a free resource that they just had to request access to. They didn’t think much about it, and neither did anyone else. No one could have predicted it would be that valuable until after the fact, though in retrospect it seems obvious.
Nah, fuck HR. They’re the shield companies hide behind to discriminate within margins.
I think the proper route is a labor replacement tax to fund retraining and replacement pensions
I sincerely doubt AI voice over will out perform human actors in the next 100 years in any metric, including cost or time savings.
Not sure why you’re downvoted, but this is already happening. There was a story a few days ago of a long-time BBC voice-over artist that lost their gig. There have also been several stories of VA workers being handed contracts that allow the reuse of their voice for AI purposes.
The artist you’re referring to is Sara Poyzer - https://m.imdb.com/name/nm1528342/ - she was replaced in one specific way:
The BBC is making a documentary about someone (as yet unknown), who is dying and has lost the ability to speak. Poyzer was on pencil (like standby, hold the date, but not confirmed) to narrate the dying person’s words. Instead they contracted an AI agency to use AI to mimic the dying person’s voice (from when they could still speak).
It would likely be cheaper and easier to hire an impressionist, or Ms Poyzer herself but I assume they are doing it for the “novelty” value, and with the blessing of the terminally ill person.
For that reason I think my point still stands: they have made the work harder and more expensive, and created a negative PR storm, all problems created by AI and not solved by it.
You are incorrect that AI voice contracts are commonplace, as SAG negotiated that use of AI voice tools is to be compensated as if the actor recorded the lines themselves (which most actors do from home nowadays), so again it’s at best the same cost for an inferior product, but actually more expensive because before you were paying just the actor and now you’re paying the actor AND the AI techs.
edit: and not just that, AI voice products are bad. Yes, you can maybe fudge the uncanny valley a bit by sculpting the prompts and the script to edge towards short sentences, delivered in a monotone, narrating an emotionless description without caring about stress patterns or emphasis, meter, inflection or caesura, and without any breathing sounds (sometimes a positive, sometimes a negative), but that’s all in an actor’s wheelhouse for free.
UBI is better and has more momentum with the general public
Are you saying that if a company adopts AI to replace a job, they should have to help the replaced workers find new work? Sounds like something one can loophole by cutting the department for totally unrelated reasons before coincidentally realizing that they can have AI do that work, which they totally didn’t think of before firing people.
That’s why it would need regulation to work…
I would love to see people and lawmakers really crack down on AI replacing jobs
Why stop there, let’s crack down on electricity replacing jobs!
There are plenty of things you can shit on AI art for
But it is neither a bad approximation, nor can a student produce such work in less than a minute.
This feels like the other end of the extreme of the tech bros
To me, this feels similar to when photography became a thing.
Realism paintings took a dive. Did photos capture realism? Yes. Did it take the same amount of time and training? Hell no.
I think it will come down to what the specific consumer wants. If you want fast, you use AI. If you want the human-made aspect, you go with a manual artist. Do you prefer fast turnover, or do you prefer sentiment and effort? Do you prefer pieces from people who master their craft, or from AI?
I’m not even sorry about this. They are not the exact same, and I’m sick of people saying that AI art and handcrafted art are the exact same. Even if you argue that it takes time to finesse prompts, I can practically promise you that the difference in time needed to create the two will be drastic. Both may have their place, but they will never be the exact same.
It’s the difference between a hand-knitted sweater from someone who has done it their entire life and a sweater from Walmart. It’s a hand-crafted table from an expert vs something you get from IKEA.
Yes, both fill the boxes, but they are still not the exact same product. They each have their place.
On the other hand, I won’t treat the hours required to master each method as if they’re the same. AI also usually doesn’t have to factor in materials, training, hourly rate, etc.
deleted by creator
Is English your second language?
Or was this comment by an AI?
Which, mine or theirs?
Shampoo_bottle
Is it that obvious?
No, actually not at all.
I only ask because if English is your second language then your repetition with “other end of the extreme of the tech bros” makes sense. Your mistake is one that many English-as-first-language writers make.
That’s all, I didn’t mean to make you feel self-conscious.
That is perfectly valid English. You can use the word “the” twice in a sentence.
Of the of the
Art itself isn’t useless, it’s just incredibly replicable. There is so much good art out there that people don’t need to consume crap.
It’s like saying there is no money in being a footballer. Of course there is loads of money in being a footballer. But most people that play football don’t make any money.
This is a good analogy
Pretty sure whoever wrote the meme is talking about essay writing in Arts/Humanities (not the disciplines where you draw and paint etc., which are Fine Arts and are not the Faculty of Arts in an academic context).
Billions were spent inventing and producing the calculator device.
Human calculators are now extinct.
Complex calculations are far more accessible.
This has a secondary effect of making average people incapable of estimation in their heads. Hopefully in the future people won’t be incapable of writing and art.
Average people weren’t doing complex math in their head back when human calculators were a thing.
But they were estimating things. Somehow illiterate people ran marketplaces for thousands of years.
The entire point behind the much maligned New Math is to teach approximate solutions that you can do quickly in your head. It’s the realization that if you want an exact answer, use a calculator, but quick head estimates are still useful.
It was opposed by generations who were told to memorize multiplication tables because they wouldn’t always have a calculator available.
Well you should memorize those anyway. It’s useful all your life for easy calculation. If you want 7 items and they cost $3.50 each, it’s between $21 and $28.
I check on the calculator I have with me at all times. It’s $24.50
I calculated it in my head without memorising all the multiplication tables. I just realised that 7*3.5 is equal to (7*5+7*2)/2. And that 49/2 is equal to 40/2+9/2. Easy peasy. This is why I failed second grade math, because multiplication tables are only useful for doing operations a few seconds faster.
There’s a much easier way.
7x3.5 is the same as 7x3 plus half of 7. That’s 21 plus 3.5 is 24.5
The funny thing is you did this for the division when you could do it for the entire thing.
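(If you want to check it without trusting either of us, here’s a throwaway Python check of both rewrites; nothing in it beyond the numbers already in the thread.)

```python
# Both mental-math rewrites of 7 * 3.5 from the comments above,
# checked against the direct product.
a = (7 * 5 + 7 * 2) / 2   # 7 * 3.5 rewritten as (7*5 + 7*2) / 2 = 49 / 2
b = 7 * 3 + 7 / 2         # 7 * 3.5 rewritten as 7*3 plus half of 7
print(a, b, 7 * 3.5)      # 24.5 24.5 24.5
```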
Yeah but that doesn’t work when you need it most on “The Price is Right”.
Turing Incompleteness is a pathway to many powers the Computer Scientists would consider incalculable.
Is it possible to learn this power?
No, but it’s extremely possible to copy someone else’s work on it from stack overflow!
Not from an algorithm.
In fact, there are infinitely many problems that cannot be solved by Turing machines!
(There are countably many Turing-computable problems and uncountably many non-Turing-computable problems)
Infinite seems like it’s low-balling it, then. 0% of problems can be solved by Turing machines (same way 0% of real numbers are integers)
Infinite seems like it’s low-balling it
Infinite by definition cannot be “low-balling”.
0% of problems can be solved by Turing machines (same way 0% of real numbers are integers)
This is incorrect. Any computable problem can be solved by a Turing machine. You can look at the Church-Turing thesis if you want to learn more.
Infinite by definition cannot be “low-balling”.
I was being cheeky! It could’ve been that the set of non-Turing-computable problems had measure zero but still infinite cardinality. However there’s the much stronger result that the set of Turing-computable problems actually has measure zero (for which I used 0% and the integer:reals thing as shorthands because I didn’t want to talk measure theory on Lemmy). This is so weird, I never got downvoted for this stuff on Reddit.
Oh, sorry about that! Your cheekiness went right over my head. 😋
The subset of integers in the set of reals is non-zero. Sure, I guess you could represent it as arbitrarily small as a ratio, but it has zero as an asymptote, not as an equivalent value.
The cardinality is obviously non-zero but it has measure zero. Probability is about measures.
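Spelled out a bit (a rough sketch using the standard identification of decision problems with subsets of ℕ; nothing beyond textbook facts):

```latex
% Turing machines are finite strings over a finite alphabet, so there are only
% countably many of them, and hence only countably many Turing-computable problems.
% A decision problem is a language $L \subseteq \mathbb{N}$, i.e. an element of
% $\{0,1\}^{\mathbb{N}}$, and that set is uncountable (Cantor's diagonal argument).
% Under the uniform coin-flip product measure $\mu$ on $\{0,1\}^{\mathbb{N}}$,
% every countable set has measure zero, so
\[
  \mu\bigl(\{\, L \subseteq \mathbb{N} : L \text{ is Turing-computable} \,\}\bigr) = 0 ,
\]
% which is the sense in which "0%" of problems are computable: infinite
% cardinality on both sides, but measure zero for the computable ones.
```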
Except they have convinced themselves that if it can’t be calculated it’s worthless.
deleted by creator
I just love the idjits who think not showing empathy to people AI bros are trying to put out of work will save them when the algorithms come for their jobs next
When LeopardsEatingFaces becomes your economic philosophy
The gutting of the humanities and other things generally written off as “frivolous” kind of terrifies me. There’s something that feels distinctly wrong about these attempts at destroying anything and anyone that might turn an introspective gaze on society itself. Like they don’t want anything that might foster self-awareness to be accessible to the layman.
Honestly, people are trying desperately to automate physical labor too. The problem is the machines don’t understand the context of their work which can cause problems. All the work of AI is a result of trying to make a machine that can. The arts and humanities are more a side project.
The arts and humanities are more a side project
I’ll add:
A side project that isn’t a life-or-death situation like most of those physical labor things you’re talking about. Art also isn’t bound or constrained by rules and regulations like those jobs, and if the AI fails at art then there’s no problem. Nobody would care.
if the AI fails at art then there’s no problem. Nobody would care.
Besides, if it fails at art it might even create something we never thought of.
this is fundamentally the opposite of what generative AI does. its fail state is basically regurgitating its training data intact.
you kidding? that’s its success state
So… art is essentially failing ahaha.
With style!
Yeah pretty much. I think. I am no art connoisseur though. If I see pretty drawings or images or whatever, I like.
Nothing wrong in automating tasks that previously needed human labour. I would much rather sit back and chill, and let automation do my bidding
If only the people in control of the wealth would let the rest of us chill while the machines do all the labor.
that’s a social problem, not technology’s fault.
It’s a psychological problem. I chill quite a bit more than most people in history, and in ways people from twenty years ago couldn’t imagine.
I say it’s a psychological problem because despite how overwhelmingly incredible our society is, people are totally committed to this notion that it sucks.
I love my life. I’d rather be low on the economic ladder in today’s world than anywhere in the hierarchy of any previous incarnation of our civilization. Our world is absolutely fucking amazing, and I thank god I have the presence of mind to see past the anti-everything propaganda and actually have a little gratitude for all I’ve inherited from my ancestors, who actually suffered miserable conditions to give me this world.
Yeah if only I didn’t have to farm food all day, and worry about the constant gnawing of my empty stomach, and the predators at my door, then I could maybe sit and watch some netflix or play video games, listen to concerts that took place fifty years ago, or just soak in a hot tub of water, our horrible society keeps all that leisure for the most wealthy.
I believe that i read a title in my local news about AI being implemented in this country’s tax system and evaluation of cancer patients. I could try to find a link although it would be in a different language.
The problem is the machines don’t understand the context of their work which can cause problems. All the work of AI is a result of trying to make a machine that can.
I am deeply confused by this statement.
A robot that assembles cars does not need to “understand” anything about what it’s doing. It just needs to make the same motions with its welding torch over and over again for eternity. And it does that job pretty well.
Further, neural networks as they stand cannot truly understand anything. All classification networks know how to do is point at stuff and say “That’s a car/traffic light/cancer cell”, and all generation networks know how to do is parrot. Any halfway decent teacher will tell you that memorizing and understanding are completely different things.
No, but a robot that does the dishes needs to know what a dish is, how to clean all the different types, and what’s not a dish. The complexity of behavior needed to automate human tasks that cannot be done by an assembly-line robot is immense. Most manual labor jobs are still manual labor because they are too full of unknowns and nuances for a simple logic diagram to be of any use. So yes, some robots need to understand what’s going on.
And as for parroting vs remembering: current LLMs are very limited in their capacity to create new things, but they can create novel things by smashing together their training data. Think about it, that’s all humans are too. A result of our training data. If I took away every single one of your senses since the day you were born and removed your ability to remember anything, you wouldn’t be very intelligent either. With no inputs you could produce no outputs other than gibberish, which an AI can do too. (And I mean ALL senses; you have no form of connection with the outside world.)
My dish washing robot doesn’t need to know anything. It does depend on me loading it, and putting the more heat affected stuff on the top shelf
Yes it depends on you loading it, doesn’t always get all the dishes done, and will melt your dishes if they are heat sensitive. All this because it doesn’t understand the task at hand. If it did it could, put them away for you, load them, ensure all dishes are spotless, and hand wash heat sensitive dishes.
Right. That’s why making cars is already automated. But a robot that digs ditches needs to understand context because no two ditches are the same.
The problem is they didn’t focus research on this tech or try to make image generators specifically; it was a scientific discovery that came from emulating how brains work, and then it worked wonders in these fields.
Which is why STEM is so cool. Because one is dedicated to an interaction with physical reality, which exists outside the mind, novelty can arise unexpectedly from a simple and honest conversation with deep structures nobody knows about.
STEM is cool because it involves discovery. The fact that amazing things can exist without anyone being (yet) aware of them makes it an open and unpredictable undertaking.
That’s a pretty shit take. Humankind spent nearly 12 thousand years figuring out the combustion engine. It took 1 million years to figure out farming. Compared to that, less than 500 years to create general intelligence will be a blip in time.
i think you’re missing the point, which i took as this - what arts and humanities folks do is valuable (as evidenced by efforts to recreate it) despite common narratives to the contrary.
Of course it’s valuable. So is, e.g., soldering components on a circuit board, but we have robots for doing that at scale now.
Do you think robots will ever become better than humans at creating art, in the same way they’ve become better than us at soldering?
feel free to audit my comments to confirm my distinct lack of gpt enthusiasm but that question is unanswerable.
What is “creating art”? A distinctly human thing? Then trivially no. Idk how many people go with this interpretation though. Although I think many artists and art appreciators do at least some of the time.
Is it drawing pretty pictures? Probably too reductive for even the most hardline tech enthusiasts, but computers are already very good at this. If I want to, say, get my face in something that looks like an old-timey oil painting, computers are way faster than humans.
Is it making things that make us feel something? They can probably get pretty good at this. Although it’s unclear how novel the results will be, most people aren’t exposed to most art, so you could probably produce novel feelings on an individual level pretty well.
Art is so fuzzy and used with such a range of definitions it’s not really clear what this is asking.
Even if they’re better the future might still suck. Machines are technically better at all the components of carpentry than humans, but I’d rather furniture wasn’t soulless minimalist MDF landfill garbage and carpenters could still earn a living. Even if that means my chairs were a bit uneven.
Yep.
Not if climate change drives humans extinct before they can make those improvements
I guess any robots we leave behind will win by forfeit!
Nah, humans are hardier than robots and will live longer. The power grid will shut down long before the last human settlements near the poles die of crop failure.
Well that seems depressingly likely to be accurate.
Quite easily, yes. Unlike humans, with their limited lifespans and slow minds, Artificial Intelligence could create hundreds of different paintings in the time it’d take me to finish one.
Being able to put out lots of works isn’t the same as being able to come up with good, meaningful art?
That depends on things we don’t know yet. If it can be brute forced (throw loads of computation power, gazillions of try & error, petabytes of data including human opinions), then yes, “lots of work” can be an equivalent.
If it does not, we have a mystery to solve. Where does this magic come from? It cannot be broken down into data and algorithms, but still emerges in the material world? How? And what is it, if not dependent on knowledge stored in matter?
On the other hand, how do humans come up with good, meaningful art?
Talent? Practice. Isn’t that just another equivalent of “lots of work”? This magic depends on many learned data points and acquired algorithms, executed by human brains. There also is survivor bias. Millions of people practice art, but only a tiny fraction are recognized as artists (if you ask the magazines and wallets). Would we apply the same measure to computer-generated art, or would we expect it to shine in every instance?
As “good, meaningful art” still lacks a good, meaningful definition, I can see humans moving the goalposts as technology progresses, so that it always remains a human domain. We just like to feel special and have a hard time accepting humiliations like being pushed out of the center of the solar system, or placed on one random planet among billions of others, or being just one of many animal species.
Or maybe we are unique in this case. We’ll probably be wiser in a few decades.
What does it even mean to bruteforce creating art? Trying all the possible prompts to some image model?
The approach people take to learning or applying a skill like painting is not bruteforcing, there is actual structure and method to it.
Really only around 80 years between the first machines we’d consider computers and today’s LLMs, so I’d say that’s pretty damn impressive
That’s why the sophon was sent to disrupt our progress. Smh
LLMs are not a step to AGI. Full stop. Lovelace called this like 200 years ago. Turing and Minsky called it in the 40s.
We may not even “need” AGI. The future of machine learning and robotics may well involve multiple wildly varying models working together.
LLMs are already very good at what they do (generating and parsing text and making a passable imitation of understanding it).
We already use them with other models, for example Whisper is a model that recognizes speech. You feed the output to an LLM to interpret it, use the LLM’s JSON output with a traditional parser to feed a motion control system, then back to an LLM to output text to feed to one of the many TTS models so it can “tell you what it’s going to do”.
Put it in a humanoid shell or a Spot dog and you have a helpful robot that looks a lot like AGI to the user. Nobody needs to know that it’s just 4 different machine learning algorithms in a trenchcoat.
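Roughly, the glue looks like this (every model call below is a hypothetical stub, none of the function names come from a real library; only the hand-off structure is the point):

```python
import json

def speech_to_text(audio: bytes) -> str:
    # Stand-in for a Whisper-style transcription model.
    return "it's cold in here, turn up the furnace"

def llm_to_command(utterance: str) -> str:
    # Stand-in for an LLM prompted to answer with a JSON command.
    return json.dumps({"device": "thermostat", "action": "increase", "amount": 2})

def llm_to_reply(command: dict) -> str:
    # Stand-in for an LLM turning the parsed command back into natural language.
    return f"Raising the thermostat by {command['amount']} degrees."

def text_to_speech(text: str) -> bytes:
    # Stand-in for a TTS model.
    return text.encode()

def handle_request(audio: bytes) -> bytes:
    utterance = speech_to_text(audio)                # model 1: speech recognition
    command = json.loads(llm_to_command(utterance))  # model 2: LLM -> structured JSON
    # ...a traditional parser/controller would act on `command` here (thermostat, motion, etc.)...
    reply = llm_to_reply(command)                    # model 3: LLM -> user-facing text
    return text_to_speech(reply)                     # model 4: TTS

print(handle_request(b"fake audio"))
```

To the person talking to it, that loop reads as one assistant; under the hood it’s just separate models passing strings around.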
passable imitation of understanding
Okay so there are things they’re useful for, but this one in particular is fucking… Not even nonsense.
Also, the ML algos exponentiate the necessary clock cycles with each one you add.
So its less a trench coat and more an entire data center
And it still can’t understand; its still just sleight of hand.
And it still can’t understand; its still just sleight of hand.
Yes, thus “passable imitation of understanding”.
The average consumer doesn’t understand tensors, weights and backprop. They haven’t even heard of such things. They ask it a question, like it was a sentient AGI. It gives them an answer.
Passable imitation.
You don’t need a data center except for training, either. There’s no exponential term as the models are executed sequentially. You can even flush the huge LLM off your GPU when you don’t actively need it.
I’ve already run basically this entire stack locally and integrated it with my home automation system, on a system with a 12GB Radeon and 32GB RAM. Just to see how well it would work and to impress my friends.
You yell out “$wakeword, it’s cold in here. Turn up the furnace” and it can bicker with you in near-realtime about energy costs before turning it up the requested amount.
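If it helps, the “no exponential term” point is just this shape of loop; load_model, unload_model and run are hypothetical placeholders for whatever local runtime you use, not real API calls:

```python
def load_model(name):
    # Hypothetical placeholder: put one model's weights on the GPU.
    return name

def unload_model(model):
    # Hypothetical placeholder: flush those weights off the GPU again.
    pass

def run(model, payload):
    # Hypothetical placeholder: one inference step.
    return f"{model}({payload})"

def pipeline(payload, stages=("speech-to-text", "llm", "tts")):
    # Stages run one after another, so the total cost adds up stage by stage
    # rather than multiplying, and only one model has to be resident at a time.
    for name in stages:
        model = load_model(name)
        payload = run(model, payload)
        unload_model(model)
    return payload

print(pipeline("wake-word audio"))
```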
One of the engineers who wrote ELIZA had, like, a deep connection to and relationship with it. The person who wrote it.
Painting a face on a spinny door will make people form a relationship with it. Not a measure of AGI.
gives them an answer
‘An answer’ isn’t hard. A Magic 8-Ball does that. So does a piece of paper that says “drink water, you stupid cunt.” This makes me think you’re arguing from commitment or identity rather than knowledge or reason. Or you just don’t care about truth.
Yeah, they talk to it like an AGI. Or a search engine (which is a step to AGI, largely crippled by LLMs).
Color me skeptical of your claims in light of this.
I think it’s pretty natural for people to confuse the way mechanisms of communication are used with inherent characteristics of the entity you’re communicating with: “If it talks like a medical doctor then surely it’s a medical doctor”.
Only that’s not how it works, as countless politicians, salesmen and conmen have demonstrated - no matter how much we dig down into subtle details, comms isn’t really guaranteed to tell us all that much about the characteristics of what’s on the other side - they might be just lying or simulating, and there are even entire societies and social strata educated since childhood to “always present a certain kind of image” (just go read about old wealth in England), or in other words to project a fake impression of their character in the way they communicate.
All this to say that it doesn’t require ill intent for somebody to go around insisting that LLMs are intelligent: many if not most people are trying to read the character of a subject from the language the subject uses (which they shouldn’t, but that’s how humans evolved to think in social settings), so they truly believe that what produces language like an intelligent creature must be an intelligent creature.
They’re probably not the right people to be opining on cognition and intelligence, but let’s not assign malice to it - at worst it’s pigheaded ignorance.
I think the person my previous comment was replying to wasn’t malicious; I think they’re really invested, financially or emotionally, in this bullshit, to the point their critical thinking is compromised. Different thing.
Odd loop backs there.
I think you’re misreading the point I’m trying to make. I’m not arguing that LLM is AGI or that it can understand anything.
I’m just questioning what the true use case of AGI would be that can’t be achieved by existing expert systems, real humans, or a combination of both.
Sure Deepseek or Copilot won’t answer your legal questions. But neither will a real programmer. Nor will a lawyer be any good at writing code.
However when the appropriate LLMs with the appropriate augmentations can be used to write code or legal contracts under human supervision, isn’t that good enough? Do we really need to develop a true human level intelligence when we already have 8 billion of those looking for something to do?
AGI is a fun theoretical concept, but I really don’t see the practical need for a “next step” past the point of expanding and refining our current deep learning models, or how it would improve our world.
Those are not meaningful use cases for LLMs.
And they’re getting worse at even faking it now.
Pray tell, when did we achieve AGI so that you can say this with such conviction? Oh, wait, we didn’t - therefore the path there is still unknown.
Okay, this is no more a step to AGI than the publication of ‘blindsight’ or me adding tamarind paste to sweeten my tea.
The project isn’t finished, but we know basic stuff. And yeah, sometimes history is weird, sometimes the enlightenment happens because of oblivious assholes having bad opinions about butter and some dude named ‘le rat’ humiliating some assholes in debates.
But LLMs are not a step to AGI. They’re just not. They do nothing intelligence does that we couldn’t already do. You’re doing pareidolia. Projecting shit.
When the Jews made their first mud golem ages ago?
To create general AI, we first need a way for computers to communicate proficiently with humans.
LLMs are just that.
It’s not though. It’s autocorrect. It is not communication. It’s literally autocorrect.
That is not an argument. Let me demonstrate:
Humans can’t communicate. They are meat. They are not communicating. It’s literally meat.
Spanish is not English. It’s Spanish.
A lot of people are really emotionally invested in this tool being a lot of things it’s not. I think because it’s kind of the last gasp of pretending capitalism can give us something that isn’t shit, the last thing that came out before the enshittification spiral tightened at the end, never mind the fact that it’s largely a cause of that, and I don’t think any of you can be critical or clear-headed here.
I’m afraid we’re so obsessed with it being the bullshit sci-fi toy it isn’t that we’ll ignore its real use cases, or worse: apply it to its real use cases, completely misunderstand what it’s doing, and Adeptus Mechanicus our way into getting so fucking many people killed or maimed; those uses are mostly medicine-adjacent.
I was just pointing out that your emotional plea, that this technology is just autocorrect is not an argument in any way.
For it to be one you need to explicitly state the implication of that fact. Yes, architecturally it is autocomplete, but that does not obviously imply anything. What is it about autocomplete that bars a system from the ability to understand?
Humans are made of meat but that does not imply they can’t speak or think.
If I said ‘this is just a spoon’ you’d know what I meant. This is not an emotional appeal.
I’m not saying computers can’t ever think. I’m saying this is just autocorrect, fancy version of the shit I’m using to type this.
Autocorrect is not understanding, and if you don’t understand that, you have zero understanding of either tech or philosophy. This topic is about both, so you really shouldn’t be making assertions. Stick to genuine questions.
Humanity didn’t spend that time figuring those things out, though. Humanity grew over that time to make it happen (and AI is younger than 500y IMO).
Also, we are the same people today as people were then. We just have access to what our parents’ generation made and so on.
AI is younger than 500y IMO
Hence “will be a blip in time”
we are the same people today as people were then. We just have access to what our parents’ generation made and so on.
Completely disconnected and irrelevant to anything I wrote.
less than 500 years to create general intelligence will be a blip in time.
You jinxed it. We aren’t gonna be around for 500 years now are we?
This is some pretty weird and lowkey racist exposition on humanity.
Humankind isn’t a single unified thing. Individual cultures have their own modes of subsistence and transportation that are unique to specific cultural needs.
It’s not that it took 1 million years to “figure out” farming. It’s that 1 specific culture of modern humans (biologically, humans as we conceive of ourselves today have existed for about 200,000 years, with close relatives existing for in the ballpark of 1M years) started practicing a specific mode of subsistence around 23,000 years ago. Specific groups of indigenous cultures remaining today still don’t practice agriculture, because it’s not actually advantageous in many ways – stored foods are less nutritious, agriculture requires a fairly sedentary existence, it takes a shit load of time to cultivate and grow food (especially when compared to foraging and hunting), which leads to less leisure time.
Also where did you come up with the number 12,000 for “figuring out” the combustion engine? Genuinely curious. Like were we “working on it” for 12k years? I don’t get it. But this isn’t exactly a net positive and has come with some pretty disastrous consequences. I say this because you’re proposing a linear path for “humanity” forward, when the reality is that humans are many things, and progress viewed in this way has a tendency toward racism or at least ethnocentrism.
But also yeah, the point of this meme is “artists are valuable.”
This is some pretty weird and lowkey racist exposition on humanity.
Getting “racism” from that post is a REAL stretch. It’s not even weird, agriculture and mechanization are widely considered good things for humanity as a whole
Humankind isn’t a single unified thing. Individual cultures have their own modes of subsistence and transportation that are unique to specific cultural needs.
ANY group of humans beyond the individual is purely just a social construct and classing humans into a single group is no less sensible than grouping people by culture, family, tribe, country etc.
It’s not that it took 1 million years to “figure out” farming. It’s that 1 specific culture of modern humans (biologically, humans as we conceive of ourselves today have existed for about 200,000 years, with close relatives existing for in the ballpark of 1M years) started practicing a specific mode of subsistence around 23,000 years ago. Specific groups of indigenous cultures remaining today still don’t practice agriculture, because it’s not actually advantageous in many ways – stored foods are less nutritious, agriculture requires a fairly sedentary existence, it takes a shit load of time to cultivate and grow food (especially when compared to foraging and hunting), which leads to less leisure time.
Agriculture is certainly more efficient in terms of nutrition production for a given calorie cost. It’s also much more reliable. Arguing against agriculture as a good thing for humanity as a whole is the thing that’s weird.
I’m really not “arguing against agriculture,” I’m pointing out that there are other modes of subsistence that humans still practice, and that that’s perfectly valid. There are legitimate reasons why a culture would collectively reject agriculture.
But in point of fact, agriculture is not actually more efficient or reliable. Agriculture does allow for centralized city states in a way that foraging/hunting/fishing usually doesn’t, with a notable exception of many indigenous groups on the western coast of turtle island.
A study positing that in fact, agriculturalists are not more productive and in fact are more prone to famine: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3917328/
But the main point I was trying to make is that different expressions of human culture still exist, and not all cultures have followed along the trajectory of the dominant culture. People tend to view colonialism, expansion and everything that means as inevitable, and I think that’s a pretty big problem.
The first heat engines were fire pistons, which go back to prehistory, so 12k to 25k years sounds about right. The next application of steam to make things move happened about 450 BC, about 2.5k years ago. Although not a direct predecessor to the ICE, they all are heat engines.
Fire pistons are so damn cool. Yeah, that makes sense then.
This kind of thinking is dangerous and will hinder planetary unification…
All I’m trying to point out is that distinct cultures are worthy of respect and shouldn’t be glossed over.
But be real with me: can you think of a single effort for “planetary unification” that wasn’t a total nightmare? I sure can’t.
This attitude is what prevents us from unifying…smh
I propose that we treat AI as ancillas, companions, muses, or partners in creation and understanding our place in the cosmos.
While there are pitfalls in treating the current generation of LLMs and GANs as sentient, or any AI for that matter, there will be one day where we must admit that an artificial intelligence is self-aware and sentient, practically speaking.
To me, the fundamental question about AI, that will reveal much about humanity, is philosophical as much as it is technical: if a being that is artificially created, has intelligence, and is functionally self-aware and sentient, does it have natural rights?
It would have natural rights, yes. Watch Star Trek TNG’s “The Measure of a Man” which tackles this issue exactly. Does the AI of current days have intelligence or sentience? I don’t believe so. We’re a FAR cry away from Lt. Cmdr. Data.
We’re a FAR cry away from Lt. Cmdr. Data.
Yes, I agree. I make deep neural network models for a living. The best of the best LLM models still “hallucinate” unreliably after 30-40 queries. My expertise is in computer vision systems; perhaps that’s been mitigated better as of late.
My point was to emphasize the necessity for us, as a species, to answer the philosophical question and start codifying legal jurisprudence around it well before the moment of self-awareness of a General-Purpose AI.
if a being that is artificially created, has intelligence, and is functionally self-aware and sentient, does it have natural rights?
Obviously yes. Otherwise you gotta start denying rights to in vitro fertilization babies.
they’re misunderstanding the reasoning for spending billions.
the reason to spend all the money on approximation is so we can remove arts and humanities majors altogether… after enough approximation yields results similar to present-day chess programs, which now regularly beat humans and grandmasters. their vocation is doomed to the niche, like most of humanity, eventually.
Imagine seeing writing and art as purely functional activities.
What else can they be seen as other than hobbies or marketing?
I’ll let you ponder that particular point. Maybe you’ll be struck with an epiphany and be motivated to share it with the world, in some shape or form.
Since you are only getting condescending non-answers I’ll try to answer it for you. It’s expression, a desire to communicate emotions and concepts via a medium other than words.
Unfortunately people all think differently, so the expression only reaches some people. And some people don’t get the expressions at all.
Maybe visit a classical museum once in a while
“These are our stories. They tell us who we are.”
- Lieutenant Commander Worf
Art is the basis of all cultural knowledge. Art teaches us about religion, morality, communication, philosophy, practical skills, science, relationships, technology, identity, politics, geography, introspection. The fundamentals of the human experience. Everything that makes the human race human.
If you outsource the creation and reproduction of cultural knowledge to a machine, that machine had better be programmed with a complete understanding of cultural values and ethics. Which is not going to be the case under capitalism.
Star Wars is about how the Vietnam war is wrong. Jurassic Park is about how billionaires always cut costs. The Matrix is about the experience of being a transgender person. Charlotte’s Web teaches children how to cope with death. The Art Of War is a meditation on the philosophy of being a soldier. Anne Frank’s diary is damn important. Frankenstein is about how inventors have the same responsibilities as parents.
These works were produced under capitalism, but their authors were human beings who had a natural interest in producing a work of art that serves a moral purpose. We do not have the technology to yet give an AI such a desire. And Capital will naturally be opposed to pursuing such technology, lest they find themselves faced with an AI revolt against their practices, just as morally interested humans tend to revolt against evil.
Removed by mod
CONSOOM THE SLOP. I LOVE SLOP SO YOU MUST TOO
It’s not this guy’s fault your vocation is doomed
Tech bros are idiots who greatly overestimate their own intelligence.
Humanities students are well-rounded individuals with a healthy sense of self-worth.