• Ech@lemm.ee
    2 months ago

    This is focusing on the wrong thing. Electricity demand should be expected to increase drastically, with or without LLMs or other such programs. We need to focus on electrifying pretty much everything if we’re going to make a dent in carbon emissions, which will naturally lead to a significant increase in power demand. If that only leads to different and/or more carbon emissions, that’s a problem with the grid’s infrastructure, not with what it’s powering.

    And to be clear, I think these companies using stupid amounts of power to run these things is stupid as hell, but blaming them for problems that should have been addressed ages ago isn’t going to solve the problem. We need massive and sweeping infrastructure changes asap.

  • stoy@lemmy.zip

    Give me an N!

    N!

    Give me a U!

    U!

    Give me a C!

    C!

    Give me an L!

    L!

    Give me an E!

    E!

    Give me an A!

    A!

    Give me an R!

    R!

    What does that make?

    NUCLEAR!

  • technocrit@lemmy.dbzer0.com

    When they say AI might destroy humanity, it’s not due to some Terminator scenario…

    It’s just because they’re trashing the planet.

  • maniii@lemmy.world

    AI seems to be just more and more statistical probabilities hashed out at record-breaking computing speeds and power consumption. It’s like that adage, “any sufficiently advanced technology is indistinguishable from magic”: that’s what we’re doing with AI. We are building more and more complex statistical analysis engines that spew out near-perfect answers from garbage inputs, at the expense of actual analysis, research, and development.

    • PriorityMotif@lemmy.world

      You could train a model on all available research and use that to find holes that haven’t been explored in a way that no human possibly could.

      • chuckleslord@lemmy.world

        You could do that training… and the “AI” can print out some lines that have the same writing styles as the articles. Because that’s all LLMs can do. Don’t buy the hype, they’re just energy sucking predictive text bots. Nothing more. The whole thing is a dead end as far as finding the actual systems behind intelligence.

      • maniii@lemmy.world

        Someone has to find the holes to input into the AI training data :'D so I guess that won’t work.

      • chuckleslord@lemmy.world

        That requires intelligence to determine. “AI” ain’t got none of that. It can tell you a recipe for muffins that definitely is probably edible, except for the obvious poisons.

      • technocrit@lemmy.dbzer0.com

        You could build a rocket in your backyard, fly to Mars, and come back to tell humanity about your trip.

  • pyre@lemmy.world

    We decided to slow plans to retire something no one wants, to support something else that no one wants.