edit to clarify a misconception in the comments: this is an Instagram post, so “caption” refers to the description under the image or video
as an example, this text I am typing right now is also a “caption”
just saying because someone started a debate misunderstanding this to be about subtitles (aka “closed captions”) and that’s just not the case 👍
This feels like the weaponization of disability rights language but I’m not sure.
It definitely is. As someone who actually struggles with severe ADHD this comment makes my piss boil.
I second that, this person is actually just lazy. I got ADHD and I always add fucking alt text, it’s part of the normal post routine no matter if I took my meds or not. And it’s not like you can’t edit it into posts if you clicked send too quickly.
I’d even argue it makes your social media experience better. It forces awareness of what you’re doing and gives you time to reflect on your post.
There was someone on TikTok defending AI “art” who said he has ADHD, that it’s hard for him to concentrate on art, and that AI makes his life “easier” by letting him feel like he did something. I don’t remember it exactly, but it was something like that. He forgot how many disabled people there are, with all kinds of different disabilities, who still manage to make near-perfect art. He also went on about how he wasn’t born with talent, as if talent even really exists.
Have ADHD, picking up a pencil intermittently when we have the executive function. Shit’s harder for us but come on.
We’d mind a lot less if people treated it like getting a commission. Sure, it’s cool that there’s art of your character, but you didn’t do the drawing, you just gave some specifics.
If you’re capable enough to bitch about being too disabled to use your brain, you’re capable enough to write your own caption.
Disabled people using their disability as a reason to defend AI, without acknowledging that disabled people will be the first to suffer when it comes to the climate crisis, water crisis, displacement, lack of privacy, and all kinds of inequity. AI is not here to help disabled people, it’s here to further capitalist billionaire goals.
ADHDer here. fuck this kid trying to throw us under the bus so they can excuse their sheer lack of a fuck to give.
Word
I tried using chatgpt for a caption once, it was horrible.
Subtitles are a perfect use case for LLMs.
No, what you are thinking of is speech-to-text software; it is much older than LLMs and works in a very different way.
While speech-to-text software indeed predates LLMs, LLMs do it as well. I’ve only tried a few basic (aka free) options, so I have no idea how well they do en masse, but the generated results were at least on par with, if not better than, YouTube’s auto captions.
It might not technically be LLMs though. It could be a different type of “ai”. I just can’t stand the “ai” marketing when nothing they are making is actually ai, so until they pull their heads out of their asses, all “ai” models are LLMs to me.
Understandable, AI marketing right now is a shitshow, but I don’t think they’re even AI. People just forget that tech used to do magic before “AI” existed.
It’s kind of the other way around, we’ve always had AI, it used to just basically mean a computer making some decision based on data. Like a thermostat changing the heating in response to a temperature change.
Then we got LLMs and because they are good at pretending to have complex reasoning ability, AI as a term started to always mean “computer with near human level intelligence” which of course they are absolutely not.
There was a book I can’t remember the name of whose whole thesis was exactly that: “AI is whatever automates the decision-making process,” not any particular group of algos.
This is a big part of it. Back when AI was first becoming big, my manager said they needed to run all my KB articles through an AI to generate link clouds or some such.
I was like umm… that’s a service this platform has always offered…? Like just because you don’t know what the kb tools do, or what our rock bottom subscription gets us, doesn’t mean I haven’t looked into it… but that also isn’t worth doing because now we only have a handful of articles in any given category because I’m good at my job…
Yeah, speech-to-text models have nothing to do with LLMs, and their use for captioning is perfectly fine imo.
Nope, they’re still not good. I use YouTube’s auto-generated subs and they 100% need an LLM to fix mistakes.
Large language models are designed to generate text based on previous text. Translation from audio to text can be done via a neural net but it isn’t a Large Language Model.
Now, you could combine the two to, say, reduce errors on mumbled words by having a generative model predict which words would fit better in the unclear sentence. However, you could likely get away with a much smaller and faster net than an LLM; in fact, you might be able to get away with plain-Jane Markov chains, no machine learning necessary.
Point is that there is a difference between LLMs and other neural nets that produce text.
In the case of audio to text translation, using an LLM would be very inefficient and slow (possibly to the point it isn’t able to keep up with the audio at all), and using a very basic text generation net or even just a probabilistic algorithm would likely do the job just fine.
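For illustration, here is a toy sketch of that Markov-chain idea (not a real captioning pipeline; the tiny corpus and the helper name are made up for the example): you count which words tend to follow which, then use those counts to guess a word the recognizer wasn’t sure about.

```python
# Toy sketch: first-order Markov chain (bigram counts) used to guess
# a garbled word from the word that came before it.
# Purely illustrative; a real system would use a large corpus.
from collections import defaultdict, Counter

corpus = ("turn the volume up please "
          "turn the lights off please "
          "turn the volume down").split()

# Count which words follow which.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def guess_unclear_word(previous_word):
    """Return the most frequent follower of previous_word, if any."""
    if previous_word not in followers:
        return None
    return followers[previous_word].most_common(1)[0][0]

# e.g. the recognizer heard "turn [???] volume up" and is unsure about [???]
print(guess_unclear_word("turn"))  # -> "the"
```

No neural net involved at all, which is the point: filling in a mumbled word doesn’t need anything close to an LLM.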
How would an LLM fix a mistake equivalent to something being misheard? I feel like you’re misunderstanding something and could probably also use some help with your English.
[…]could probably also use some help with your English.
what the actual fluff is up with lemmy.world accounts in this thread acting like jerks?
lemmy.world accounts acting like jerks
many such cases
As someone who uses a screen reader daily, absolutely the fuck not.
LLMs will invent things out of thin air and ruin any comprehension. It wastes my time rather than helping me.
If you use any generic LLM then yes, but there are LLMs (like I said in another reply, it’s probably not actually an LLM, but since there is no “real” AI, that’s what I’m calling all this AI bullshit) that are trained specifically for captioning/transcripts, just not necessarily run in real time.
Doing it “live” is what increases the error rate.
LLMs are large language models, they’re a specialized category of artificial neural network, which are a way of doing machine learning. All of those topics are under the academic computer science discipline of artificial intelligence.
AI, neural net, or ML model are all way more accurate to say than LLM in this case.
Crunchyroll really messed up their subs with AI. Not sure if they mean LLMs and are just calling it AI but still:
Kept wondering why subtitles were so obviously off when I was watching some stuff. It was horrid.
Automatic subtitles like on YouTube use Machine Learning, NOT a Large Language Model.
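For what it’s worth, a minimal sketch of that kind of ML transcription using OpenAI’s open-source Whisper speech-to-text model (a neural net, not a large language model). It assumes the `openai-whisper` package is installed and that `clip.mp3` is a local audio file; the filename is just an example.

```python
# Minimal speech-to-text sketch with Whisper, a machine-learning
# transcription model that is not an LLM.
# Assumes: pip install openai-whisper, and a local file "clip.mp3".
import whisper

model = whisper.load_model("base")      # small pretrained checkpoint
result = model.transcribe("clip.mp3")   # runs speech-to-text on the file
print(result["text"])                   # plain transcript, no LLM involved
```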
Fuck no.
to clarify: we are talking about a post caption, not closed captions.
that is, the text you put in the description of an image or video post.