edit to clarify a misconception in the comments: this is an instagram post, so “caption” refers to the description under the image or video

as an example, this text i am typing now is also a “caption”

just saying because someone started a debate misunderstanding this to be about subtitles (aka “closed captions”) and that’s just not the case 👍

    • Turret3857@infosec.pub · 2 months ago

      It definitely is. As someone who actually struggles with severe ADHD, this comment makes my piss boil.

      • Natanox@discuss.tchncs.de · 2 months ago

        I second that; this person is actually just lazy. I’ve got ADHD and I always add the fucking alt text; it’s part of the normal post routine whether or not I’ve taken my meds. And it’s not like you can’t edit it into a post if you clicked send too quickly.

        I’d even argue it makes your social media experience better. It forces awareness of what you’re doing and gives you time to reflect on your post.

        • Una@europe.pub · 2 months ago

          There was someone on TikTok defending AI “art” who said he has ADHD, that it’s hard for him to concentrate on art, and that AI makes his life “easier” by letting him feel like he did something; I don’t remember exactly, but it was something like that. He forgot how many disabled people there are, with all kinds of disabilities, who are still able to make near-perfect art. He also mentioned that he wasn’t born with talent, as if talent were even a real thing.

          • HopeOfTheGunblade@lemmy.blahaj.zone · 2 months ago

            Have ADHD, pick up a pencil intermittently when we have the executive function. Shit’s harder for us, but come on.

            We’d mind a lot less if people treated it like getting a commission. Sure, it’s cool that there’s art of your character, but you didn’t do the drawing, you just gave some specifics.

  • MeaanBeaan@lemmy.world · 2 months ago

    If you’re capable enough to bitch about being too disabled to use your brain, you’re capable enough to write your own caption.

  • Resistai@lemmy.world · 2 months ago

    Disabled people using their disability as a reason to defend AI, but not acknowledging that disabled people will be the first to suffer when it comes to the climate crisis, water crisis, displacement, lack of privacy, and all kinds of inequity. AI is not here to help disabled people; it’s here to further capitalist billionaire goals.

    • Jack@slrpnk.net · 2 months ago

      No, what you’re thinking of is speech-to-text software; it’s much older than LLMs and works in a very different way.

      • thejoker954@lemmy.world · 2 months ago

        While speech-to-text software indeed predates LLMs, LLMs do it as well. I’ve only tried a few basic (aka free) options, so I have no idea how well they do at scale, but the generated results were at least on par with, if not better than, YouTube’s auto-captions.

        It might not technically be LLMs, though. It could be a different type of “AI”. I just can’t stand the “AI” marketing when nothing they’re making is actually AI, so until they pull their heads out of their asses, all “AI” models are LLMs to me.
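
        As an illustration of the kind of free, local option being described (a speech-to-text neural net rather than an LLM), a minimal sketch using the openai-whisper Python package might look like the following; the audio file name is just a placeholder:

            # pip install openai-whisper  (ffmpeg must also be installed)
            import whisper

            model = whisper.load_model("base")            # small model, runs locally for free
            result = model.transcribe("some_video.mp3")   # placeholder audio file
            print(result["text"])                         # plain transcript of the speech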

        • Jack@slrpnk.net · 2 months ago

          Understandable; AI marketing right now is a shitshow, but I don’t think they’re even AI. People just forget that tech used to do magic before “AI” existed.

          • LwL@lemmy.world · 2 months ago

            It’s kind of the other way around: we’ve always had AI, and it used to basically just mean a computer making some decision based on data, like a thermostat changing the heating in response to a temperature change.

            Then we got LLMs, and because they’re good at pretending to have complex reasoning ability, “AI” as a term started to mean “computer with near-human-level intelligence”, which of course they absolutely are not.
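
            A toy version of that older, looser sense of “AI” (a rule making a decision from a sensor reading) could be as small as this sketch; the temperatures are made up:

                def thermostat(current_temp_c, target_temp_c=21.0):
                    # Decide based on a single data point, no learning involved.
                    return "heat on" if current_temp_c < target_temp_c else "heat off"

                print(thermostat(18.5))  # -> heat on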

            • Jack@slrpnk.net · 2 months ago

              There was a book, I can’t remember which, whose whole thesis was exactly that: “AI is whatever automates the decision-making process,” not any particular group of algorithms.

          • ButteryMonkey@piefed.social · 2 months ago

            This is a big part of it. Back when AI was first becoming big, my manager said they needed to run all my KB articles through an AI to generate link clouds or some such.

            I was like, umm… that’s a service this platform has always offered…? Just because you don’t know what the KB tools do, or what our rock-bottom subscription gets us, doesn’t mean I haven’t looked into it… and it isn’t worth doing anyway, because we only have a handful of articles in any given category, because I’m good at my job…

        • oplkill@lemmy.world · 2 months ago

          Nope, they’re still not good. I use YouTube’s auto-generated subs and they 100% need an LLM to fix the mistakes.

          • AnarchoEngineer@lemmy.dbzer0.com · 2 months ago

            Large language models are designed to generate text based on previous text. Converting audio to text can be done with a neural net, but that net isn’t a large language model.

            Now, you could combine the two to, say, reduce errors on words that were mumbled, by having a generative model predict which words would fit better in the unclear sentence. But you could likely get away with a much smaller and faster net than an LLM; in fact, you might be able to get away with plain-Jane Markov chains, no machine learning necessary.

            Point is that there is a difference between LLMs and other neural nets that produce text.

            In the case of audio to text translation, using an LLM would be very inefficient and slow (possibly to the point it isn’t able to keep up with the audio at all), and using a very basic text generation net or even just a probabilistic algorithm would likely do the job just fine.
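
            For illustration only, a minimal sketch of that plain probabilistic idea: a bigram Markov model counts word pairs from a small corpus and picks whichever candidate word fits best between its neighbours. The corpus and function names are made up for the example, not taken from any real captioning system.

                from collections import Counter, defaultdict

                def build_bigrams(corpus):
                    # Count how often each word is followed by each other word.
                    bigrams = defaultdict(Counter)
                    for sentence in corpus:
                        words = sentence.lower().split()
                        for prev, nxt in zip(words, words[1:]):
                            bigrams[prev][nxt] += 1
                    return bigrams

                def pick_candidate(bigrams, prev_word, next_word, candidates):
                    # Score each candidate by how well it fits between its neighbours.
                    return max(candidates,
                               key=lambda c: bigrams[prev_word][c] + bigrams[c][next_word])

                # Tiny made-up corpus standing in for real training text.
                corpus = [
                    "please add alt text to the image",
                    "the image needs a caption",
                    "write a caption for the post",
                ]
                bigrams = build_bigrams(corpus)

                # The recognizer was unsure between "image" and "imagine" here.
                print(pick_candidate(bigrams, "the", "needs", ["image", "imagine"]))  # -> image

            No neural net anywhere; it’s just counting, which is the point being made.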

          • Ziglin (it/they)@lemmy.world · 2 months ago

            How would an LLM fix a mistake equivalent to something being misheard? I feel like you’re misunderstanding something and could probably also use some help with your English.

    • RushLana@lemmy.blahaj.zone · 2 months ago

      As someone who uses a screen reader daily, absolutely the fuck not.

      LLMs will invent things out of thin air and ruin any comprehension. They waste my time rather than help me.

      • thejoker954@lemmy.world · 2 months ago

        If you use any generic LLM, then yes, but there are LLMs (like I said in another reply, it’s probably not an LLM, but since there is no ‘real’ AI, that’s what I’m calling all this AI bullshit) that are trained specifically for captioning/transcripts, just not necessarily run in real time.

        Doing it “live” is what increases the error rate.

        • leftytighty@slrpnk.net · 2 months ago

          LLMs are large language models; they’re a specialized category of artificial neural network, which is a way of doing machine learning. All of those topics fall under the academic computer science discipline of artificial intelligence.

          AI, neural net, or ML model are all way more accurate to say than LLM in this case.

    • spujb@lemmy.cafe (OP) · edited · 2 months ago

      to clarify, we are talking about a post caption, not closed captions.

      that is, the text you put in the description of an image or video post.