• Vanth@reddthat.com · +20 / −2 · 7 months ago

    Is environmental impact at the top of anyone’s list of reasons not to like ChatGPT? It isn’t on mine, nor on that of anyone I’ve talked to.

    The two most common reasons I hear are 1) no trust in the companies hosting the tools to protect consumers, and 2) rampant theft of IP to train LLMs.

    The author moves away from a strict environmental focus, despite claims to the contrary in their intro:

    This post is not about the broader climate impacts of AI beyond chatbots, or about whether AI is bad for other reasons

    […]

    Other Objections: This is all a gimmick anyway. Why not just use Google? ChatGPT doesn’t give better information

    … yet doesn’t address the most common criticisms.

    Worse, the author paints anyone who pauses to consider ChatGPT’s negatives as absurdly illogical.

    Being around a lot of adults freaking out over 3 Wh feels like I’m in a dream reality. It has the logic of a bad dream. Everyone is suddenly fixating on this absurd concept or rule that you can’t get a grasp of, and scolding you for not seeing the same thing. Posting long blog posts is my attempt to get out of the weird dream reality this discourse has created.

    IDK what logical fallacy this is, but claiming people are “freaking out over 3 Wh” is very disingenuous.
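
    For scale, here is a minimal back-of-envelope sketch in Python that takes the post’s 3 Wh/query figure at face value; the usage numbers are illustrative assumptions, not measurements:

    ```python
    # Back-of-envelope: what the post's 3 Wh/query figure implies at scale.
    # All usage numbers below are assumptions for illustration.
    WH_PER_QUERY = 3.0      # figure from the blog post under discussion
    QUERIES_PER_DAY = 20    # assumed heavy-user query count
    DAYS_PER_YEAR = 365

    yearly_kwh = WH_PER_QUERY * QUERIES_PER_DAY * DAYS_PER_YEAR / 1000
    print(f"~{yearly_kwh:.1f} kWh/year per heavy user")  # ~21.9 kWh/year

    # For comparison, a single 10 W LED bulb run 6 h/day also uses
    # 10 * 6 * 365 / 1000 ≈ 21.9 kWh/year.
    ```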

    Rating as basic content: 2/10, poor and disingenuous argument

    Rating as example of AI writing: 5/10, I’ve certainly seen worse AI slop

    • anus@lemmy.world (OP) · +3 / −10 · 7 months ago

      Thank you for your considered and articulate comment

      What do you think about the significant difference in attitude between comments here and in (quite serious) programming communities like https://lobste.rs/s/bxixuu/cheat_sheet_for_why_using_chatgpt_is_not

      Are we in different echo chambers? Is ChatGPT a uniquely powerful tool for programmers? Is social media a fundamentally Luddite mechanism?

      • Rooki@lemmy.world · +2 · 7 months ago

        I would say GitHub Copilot (which uses a GPT model) uses more Wh than ChatGPT, because it gets blasted with more queries on average: the “AI” autocomplete triggers almost every time you stop typing, or on random occasions.
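
        A rough sketch of why trigger frequency could dominate, even if each completion is much cheaper than a chat query. Every number here is an assumption for illustration; I have not measured Copilot:

        ```python
        # Rough comparison of daily energy from autocomplete-style triggering
        # vs. deliberate chat queries. All numbers are illustrative assumptions.
        WH_PER_CHAT_QUERY = 3.0   # the figure debated in this thread
        WH_PER_COMPLETION = 0.3   # assume a completion is ~10x cheaper
        TRIGGERS_PER_HOUR = 120   # autocomplete firing ~every 30 s of typing
        CODING_HOURS_PER_DAY = 6
        CHAT_QUERIES_PER_DAY = 20

        copilot_wh = WH_PER_COMPLETION * TRIGGERS_PER_HOUR * CODING_HOURS_PER_DAY
        chat_wh = WH_PER_CHAT_QUERY * CHAT_QUERIES_PER_DAY
        print(f"Copilot-style: ~{copilot_wh:.0f} Wh/day")  # ~216 Wh/day
        print(f"Chat-style:    ~{chat_wh:.0f} Wh/day")     # ~60 Wh/day
        # Even at 1/10 the per-request cost, high trigger counts can dominate.
        ```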

  • carrion0409@lemm.ee · +14 / −3 · 7 months ago

    Every time I see a post like this I lose a little more faith in humanity

  • Beppe_VAL@feddit.it · +10 / −2 · 7 months ago

    Even Sam Altman acknowledged last year the huge amount of energy needed by ChatGPT, and the need for an energy breakthrough…

    • anus@lemmy.world (OP) · +2 / −13 · 7 months ago

      Do you hold Sam Altman’s opinion higher than the reasoning here? In general, or just on this particular take?

      • Neverclear@lemmy.dbzer0.com · +5 · 7 months ago

        What would Altman gain from overstating the environmental impact of his own company?

        What if power consumption is not so much limited by the software’s appetite, but rather by the hardware’s capabilities?

        • anus@lemmy.world (OP) · +1 / −2 · 7 months ago

          What would Altman gain from overstating the environmental impact of his own company?

          You should consider the possibility that CEOs of big companies essentially always think very hard about how to talk about everything so that it benefits them.

          I can see the benefits; I can try to explain if you’re actually interested.

  • Dekkia@this.doesnotcut.it · +6 · 7 months ago

    I struggle to see why numerous scientists (and even Sam ‘AI’ Altman himself) would be wrong about this, but a random Substack post holds the truth.

    • anus@lemmy.world (OP) · +1 / −1 · 7 months ago

      1. Have you read the post?

      2. If you’d like to refute the content by citing another scientist, can you please provide a reference? I will read it.

  • jonathan@lemmy.zip · +2 · 7 months ago

    ChatGPT energy costs are highly variable depending on context length and model used. How have you factored that in?
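
    To make the variability concrete, here is a toy per-token cost model; the coefficients are made-up placeholders, not measured values for any real ChatGPT model:

    ```python
    # Toy model: energy scales (roughly) with tokens processed, and output
    # tokens cost more than prompt tokens. Coefficients are placeholders.
    def query_energy_wh(prompt_tokens: int, output_tokens: int,
                        wh_per_prompt_token: float = 0.0002,
                        wh_per_output_token: float = 0.002) -> float:
        return (prompt_tokens * wh_per_prompt_token
                + output_tokens * wh_per_output_token)

    print(query_energy_wh(100, 300))        # short chat:        ~0.62 Wh
    print(query_energy_wh(50_000, 2_000))   # long-context query: ~14 Wh
    ```

    The point: the same headline “per query” number can be off by an order of magnitude depending on context length.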

    • Ace@feddit.uk · +2 / −1 · 7 months ago

      sure, but I tried this and they all suck. I only have 8 GB of RAM, so I can only use the smaller versions of the models, and they’re much, much worse about just making up random shit

      • NeilBrü@lemmy.world · +1 · 7 months ago

        Oof, ok, my apologies.

        I am, admittedly, “GPU rich”; I have ~48 GB of VRAM at my disposal on my main workstation, and 24 GB on my gaming rig. Thus, I am using Q8 and Q6_L quantized GGUFs.

        Naturally, my experience with the “fidelity” of my LLMs re: hallucinations would be better.
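
        For anyone sizing local models: a common rule of thumb is that a GGUF’s weight memory is roughly parameter count × bits per weight / 8, plus overhead for KV cache and buffers. A minimal sketch; the overhead factor and bits-per-weight values are loose assumptions:

        ```python
        # Rough GGUF memory estimate: params * bits/8, plus overhead for
        # KV cache and buffers. The 1.2 overhead factor is an assumption.
        def approx_mem_gb(params_billions: float, bits_per_weight: float,
                          overhead: float = 1.2) -> float:
            return params_billions * bits_per_weight / 8 * overhead

        # Approximate effective bits/weight for common llama.cpp quants.
        for name, bits in [("Q8_0", 8.5), ("Q6_K", 6.6), ("Q4_K_M", 4.8)]:
            print(f"8B model @ {name}: ~{approx_mem_gb(8, bits):.1f} GB")
        # Q8_0 of an 8B model (~10 GB) won't fit in 8 GB of RAM, which is
        # why smaller machines end up on the lower-quality quants.
        ```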

    • anus@lemmy.world (OP) · +1 / −2 · 7 months ago

      I actually think that (presently) self-hosted LLMs are much worse for hallucination.