• ronigami@lemmy.world

    I mean, it is objectively bad for life. Throwing away millions to billions of gallons of water all so you can get some dubious coding advice.

    • wischi@programming.dev

      Throwing away water? Does it escape into space? I completely understand the energy arguments, but water?

  • r00ty@kbin.life

    Now see, I like the idea of AI.

    What I don’t like are the implications, and the current reality of AI.

    I see businesses embracing AI without fully understanding its limits. They stop hiring junior developers and often fire large numbers of seniors, because they think AI, a group of cheap post-grad vibe programmers, and a handful of seasoned seniors will equal the workforce they got rid of, when AI, while very good, is not ready to sustain this. It is destroying career progression for the whole industry, and even if/when they realise it was a mistake, the industry may already be devastated by then.

    I see the large tech companies tearing through the web, illegally sucking up anything they can access to pull into their ever more costly models, with zero regard for the effects on the economy, the cost to the servers they are hitting, or the environmental impact of the huge power draw that creating these models requires.

    It’s a nice idea, but private business cannot be trusted to do this right; we’re seeing how to do it wrong, live, before our eyes.

    • WanderingThoughts@europe.pub

      And the whole AI industry is holding up the stock market, while AI has historically always run the hype cycle and crashed into an AI winter. Stock markets do crash after billions pumped into a sector suddenly turn out to be not worth as much. Almost none of these AI companies run a profit, and they don’t have any prospect of becoming profitable. It’s when everybody starts yelling that this time it’s different that things really become dangerous.

      • merc@sh.itjust.works

        and don’t have any prospect of becoming profitable

        There’s a real twist here regarding OpenAI.

        They have some kind of weird corporate structure where OpenAI is a non-profit that owns a for-profit arm. But the deal they have with SoftBank is that they have to transition to a for-profit by the end of the year or they lose out on the $40 billion SoftBank invested. If they don’t manage that, SoftBank can withhold something like $20B of the $40B, which would be catastrophic for OpenAI. Transitioning to a for-profit is not something that can realistically be done by the end of the year, even if everybody agreed on the transition, and key people don’t agree on it.

        The whole bubble is going to pop soon, IMO.

      • sp3ctr4l@lemmy.dbzer0.com

        Yep, exactly.

        They knew the housing/real-estate bubble would pop, as it currently is…

        … So they made one final gambit on AI as the last bubble, the one that would magically become superintelligent and solve literally all problems.

        This was never going to work, and is not working, because the underlying tech of LLMs has no actual mechanism by which it would or could develop complex, critical, logical analysis / theorization / metacognition that isn’t just a schizophrenic manic episode.

        LLMs are fancy, inefficient autocomplete algos.

        That’s it.

        They achieve a simulation of knowledge via consensus, not analytic review.

        They can never be more intelligent than an average human with access to all the data they’ve … mostly illegally stolen.

        The entire bet was ‘maybe superintelligence will somehow be an emergent property; just give it more data and compute power’.

        And then they did that, and it didn’t work.
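
        To make the “fancy autocomplete” point concrete, here’s a minimal sketch of what an LLM does at inference time: score every possible next token, append the likeliest one, repeat. (A sketch only, assuming the Hugging Face transformers library; “gpt2” is just an illustrative small model.)

        ```python
        # Minimal sketch: an LLM is a next-token predictor run in a loop.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
        for _ in range(10):
            logits = model(ids).logits         # a score for every token in the vocab
            next_id = logits[0, -1].argmax()   # greedily take the most likely token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

        print(tokenizer.decode(ids[0]))
        # There is no reasoning step anywhere in the loop, just
        # "which token tends to come next?" applied repeatedly.
        ```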

          • WanderingThoughts@europe.pub

            That too is the classic hype cycle. After the trough of disillusionment, and that’s going to be a deep one from the look of things, people figure out where it can be used profitably in its own niches.

            • sp3ctr4l@lemmy.dbzer0.com

              … Unless its mass proliferation of shitty broken code, mis/disinformation, hyper-parasocial relationships, and waste of energy and water is actually such a net negative that it fundamentally undermines infrastructure and society, raising the necessary profit margin too high for such legit use cases to be workable in a now-broken economic system.

              • someacnt@sh.itjust.works

                The world revolves around the profit margin, so the current trend may even continue indefinitely… Sad.

            • Ek-Hou-Van-Braai@piefed.socialOP

              Time will tell how much was just hype and how much actually had merit. I think it will go the way of the dot-com bubble.

              LOTS of uses for the internet, but it was still overhyped at the time.

                • Ek-Hou-Van-Braai@piefed.socialOP

                  Fair enough.

                  The dot-com bubble (late 1990s–2000) was when investors massively overvalued internet-related companies just because they had “.com” in their name, even if they had no profits or solid business plans. It burst in 2000, wiping out trillions in value.

                  The “Internet hype” bubble popped. But the Internet still has many valid uses.

          • sp3ctr4l@lemmy.dbzer0.com

            I mean, I also agree with that, lol.

            There absolutely are valid use cases for this kind of ‘AI’.

            But it is very, very far from the universal panacea that the capital class seems to think it is.

            • Ek-Hou-Van-Braai@piefed.socialOP

              When all the hype dies down, we will see where it’s actually useful. But I can bet you it will have uses; it’s been very helpful in making certain aspects of my life a lot easier, and I know many who say the same.

    • SubArcticTundra@lemmy.ml

      It’s a nice idea, but private business cannot be trusted to do this right; we’re seeing how to do it wrong, live, before our eyes.

      You’re right. It’s the business model driving technological advancement in the 21st century that’s flawed.

    • I have to disagree that it’s even a nice idea. The “idea” behind AI appears to be wanting a machine that thinks or works for you with (at least) the intelligence of a human being and no will or desires of its own. At its root, this is the same drive behind chattel slavery, which leads to a pretty inescapable conundrum: either AI is illusory marketing BS or it’s the rebirth of one of the worst atrocities history has ever seen. Personally, hard pass on either one.

      • nickwitha_k (he/him)@lemmy.sdf.org

        You nailed it, IMO. However, I would like a real artificial sentience of some sort, just to add to the beautiful variety of the universe. It does seem that many of my fellow humans just want chattel slaves, though. Which is saddening.

  • Deflated0ne@lemmy.world

    The problem isn’t AI. The problem is Capitalism.

    The problem is always Capitalism.

    AI, Climate Change, rising fascism, all our problems are because of capitalism.

    • Ofiuco@piefed.ca

      Can’t delete this old-ass comment because the fediverse is so free it forces me not to delete it.
      Anyway, don’t care; I still think the root of the problem is humans, and we will ruin whatever system is in place,
      even if Lemmy users want to blindly believe that switching away from capitalism will fix every single problem.

      • zeca@lemmy.ml

        Problems would exist in any system, but not the same problems. Each system has its own set of problems and challenges. Just look at history: problems change. Of course you can find analogies between problems, but their nature changes with our systems. Hunger, child mortality, pollution, having no free time, war, censorship, mass surveillance… these are not constant through history. They happen more or less depending on the social systems in place, which vary constantly.

      • Eldritch@piefed.world

        While you aren’t wrong about human nature, I’d say you’re wrong about systems. How would the same thing happen under an anarchist system? Or under an actual communist (not Marxist-Leninist) system? Both account for human nature and focus on using it against itself.

        • Ace T'Ken@lemmy.ca

          I’ll answer. Because some people see these systems as “good” regardless of political affiliation, want them furthered, and see any cost as worth it. If an anarchist / communist sees these systems in a positive light, then they will absolutely try to use them at scale. These people absolutely exist, and you could find many examples of them on Lemmy. Try DB0.

          • Eldritch@piefed.world

            And the point of anarchist or actual communist systems is that such scale would be minuscule, not massive national or unanswerable state scales.

            And yes, I’m an anarchist. I know DB0 and their instance, and I generally agree with their stance, because it would allow any one of us to effectively advocate against it if we desired to.

            There would be no tech broligarchy forcing things on anyone. They’d likely all have been hanged long ago, and no one would miss them, as they provide nothing of real value anyway.

            • Blue_Morpho@lemmy.world

              And the point of anarchist or actual communist systems is that such scale would be minuscule.

              Every community running its own AI would be even more wasteful than corporate centralization. It doesn’t matter what the system is if people want it.

              • Eldritch@piefed.world

                The point is, most wouldn’t. It’s of little real use currently, especially the LLM bullshit. The communities would have infinitely better things to put resources toward.

                • Blue_Morpho@lemmy.world

                  The point is, most wouldn’t.

                  People currently want it despite it being stupid, which is why corporations are in a frenzy to be the monopoly that provides it. People want all sorts of stupid things. A different system wouldn’t change that.

          • pebbles@sh.itjust.works

            I think you are underestimating how adaptable humans are. We absolutely conform to the systems that govern us, and they are NOT equally likely to produce bad outcomes.

            • JargonWagon@lemmy.world

              Every system eventually ends with someone corrupted by power and greed wanting more. Putin and his oligarchs, Trump and his oligarchs… Xi isn’t great, but at least I haven’t heard news about the Uyghur situation for a couple of years now. I hope things are better there nowadays and that people aren’t going missing anymore just for speaking out against their government.

              • Ceedoestrees@lemmy.world

                Time doesn’t end with corrupt power; those are just things that happen. Bad shit always happens; it’s the Why, How Often, and How We Fix It that are more indicative of success. Every machine breaks down eventually.

              • pebbles@sh.itjust.works

                I mean you’d have to be pretty smart to make the perfect system. Things failing isn’t proof that things can’t be better.

    • SugarCatDestroyer@lemmy.world

      Rather, our problem is that we live in a world where the strongest survive, and the strongest does not mean the smartest… So, alas, we will always be in complete shit until we disappear.

      • chuckleslord@lemmy.world

        That’s a pathetic, defeatist world view. Yeah, we’re victims of our circumstances, but we can make the world a better place than what we were raised in.

        • SugarCatDestroyer@lemmy.world

          Well, you can believe that there is a chance, but there is none; one can only be created with sweat and blood. There are no easy ways, you know; sometimes there are none at all, and sometimes even creating one seems like a miracle.

      • Ceedoestrees@lemmy.world

        The fittest survive. The problem is creating systems where the best fit are people who lack empathy and a moral code.

        A better solution would be selecting world leaders from the population at random.

  • bridgeenjoyer@sh.itjust.works

    It’s true. We can have a nuanced view. I’m just so fucking sick of the paid-off media hyping this shit, and normies thinking it’s the best thing ever when they know NOTHING about it. And the absolute blind trust and corpo worship make me physically ill.

    • Honytawk@lemmy.zip

      Nuance is the thing.

      Thinking AI is the devil, will kill your grandma, and will shit in your shoes is just as dumb as thinking AI is the solution to every problem, will take over the world, and will become our overlord.

      The truth is, like always, somewhere in between.

  • rustydrd@sh.itjust.works

    Lots of AI is technologically interesting and has tons of potential, but this kind of chatbot and image/video generation stuff we got now is just dumb.

    • MrMcGasion@lemmy.world

      I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap, interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work with potential, by throwing more and more hardware and money at LLMs and generative AI models, because they don’t understand the technology and see it as a way to get rich and powerful quickly.

      • FauxLiving@lemmy.world

        I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames.

        I can’t imagine that you read much about AI outside of web sources or news media, then. The exciting uses of AI are not LLMs and diffusion models, though that is all the public talks about when they talk about ‘AI’.

        For example, we have been trying to find a way to predict protein folding for decades. Using machine learning, a team was able to train a model (https://en.wikipedia.org/wiki/AlphaFold) to predict the structure of proteins with high accuracy. Other scientists have used similar techniques to train a diffusion model that will generate a string of amino acids which will fold into a structure with the specified properties (like how image description prompts are used in an image generator).

        This is particularly important because, thanks to mRNA technology, we can write arbitrary sequences of mRNA which will co-opt our cells to produce said protein.


        Robotics is undergoing similar revolutionary changes. Here is a state of the art robot made by Boston Dynamics using a human programmed feedback control loop: https://www.youtube.com/watch?v=cNZPRsrwumQ

        Here is a Boston Dynamics robot “using reinforcement learning with references from human motion capture and animation.”: https://www.youtube.com/watch?v=I44_zbEwz_w


        Object detection, image processing, logistics, speech recognition, etc.: these are all things that required tens of thousands of hours of science and engineering time to develop software for, and the software wasn’t great. Now, a college freshman with free tools and a consumer graphics card can train a computer vision network that outperforms that human-created software.
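
        (As a rough illustration of that last claim, here’s the kind of short transfer-learning script meant here, written with free tools. A sketch only, assuming PyTorch/torchvision; the image folder and its classes are hypothetical placeholders.)

        ```python
        # Minimal sketch: fine-tune a pretrained vision model on a custom dataset.
        import torch
        import torch.nn as nn
        from torchvision import datasets, models, transforms

        tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
        data = datasets.ImageFolder("my_images/", transform=tf)  # hypothetical folder
        loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

        # Start from weights pretrained on ImageNet; replace only the final layer.
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        model.fc = nn.Linear(model.fc.in_features, len(data.classes))

        opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        for epoch in range(3):               # a few epochs often suffices here
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(model(x), y)  # classify, compare to labels
                loss.backward()
                opt.step()
        ```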

        AI isn’t LLMs and image generators; those may as well be toys. I’m sure eventually LLMs and image generation will be good, but the only reason they seem amazing is that they are a novel capability computers have not had before. Their actual impact on the real world will be minimal outside of specific fields.

        • MrMcGasion@lemmy.world

          Oh I have read and heard about all those things, none of them (to my knowledge) are being done by OpenAI, xAI, Google, Anthropic, or any of the large companies fueling the current AI bubble, which is why I call it a bubble. The things you mentioned are where AI has potential, and I think that continuing to throw billions at marginally better LLMs and generative models at this point is hurting the real innovators. And sure, maybe some of those who are innovating end up getting bought by the larger companies, but that’s not as good for their start-ups or for humanity at large.

          • FauxLiving@lemmy.world

            AlphaFold is made by DeepMind, an Alphabet (Google) subsidiary.

            Google and OpenAI are also both developing world models.

            These are a way to generate realistic environments that behave like the real world, and they are core to generating the volume of synthetic training data that would make training robotics models massively more efficient.

            Instead of building an actual physical robot and having it slowly interact with the world while learning from its one physical body, the robot’s builder could create a world-model representation of the robot’s physical characteristics and attach the control software to the simulation. Now the robot can train in a simulated environment, and you can create multiple parallel copies of that setup to generate training data rapidly.

            It would be economically unfeasible to build 10,000 prototype robots in order to generate training data, but it is easy to see how running 10,000 simulated copies in parallel is possible.
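
            (A rough sketch of the “many parallel simulated copies” idea, using the Gymnasium API; the CartPole environment here is just a stand-in for a learned world model of a real robot.)

            ```python
            # Minimal sketch: gather experience from many simulated environments
            # at once instead of from one slow physical robot.
            import gymnasium as gym

            envs = gym.vector.SyncVectorEnv(
                [lambda: gym.make("CartPole-v1") for _ in range(64)]  # 64 parallel "robots"
            )
            obs, info = envs.reset(seed=0)
            for step in range(1000):
                actions = envs.action_space.sample()   # a trained policy would go here
                obs, rewards, terms, truncs, info = envs.step(actions)
            # 64,000 environment interactions collected with zero physical hardware.
            ```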

            I think that continuing to throw billions at marginally better LLMs and generative models at this point is hurting the real innovators.

            On the other hand, the billions of dollars being thrown at these companies are being used to hire machine learning specialists. The real innovators, who have the knowledge and talent to work on these projects, almost certainly work for one of these companies or the DoD. This demand for machine learning specialists (and their high salaries) drives students to change their major to the field and creates more innovators over time.

  • Rose@slrpnk.net

    The currently hot LLM technology is very interesting, and I believe it has legitimate use cases if we develop them into tools that assist work. (For example, I’m very intrigued by the stuff that’s happening in the accessibility field.)

    I mostly have a problem with the AI business: ludicrous use cases (shoving AI into places where it has no business being), sheer arrogance about the sociopolitics in general, environmental impact. LLMs aren’t good enough for “real” work, but snake-oil salesmen keep saying they can do it, and uncritical people keep falling for it.

    And of course, the social impact was just not what we were ready for. “Move fast and break things” may be a good mantra for developing tech, but not for releasing stuff that has vast social impact.

    I believe the AI business and the tech hype cycle are ultimately harming the field. Until now, AI technologies were gradually developed and integrated into software where they served a purpose. Now the field is marred with controversy for decades to come.

    • UnderpantsWeevil@lemmy.world

      if we develop them into tools that assist work

      Spoilers: We will not

      I believe the AI business and the tech hype cycle is ultimately harming the field.

      I think this is just an American way of doing business. And it’s awful, but at the end of the day people will adopt technology if it makes them greater profit (or at least screws over the correct group of people).

      But where the Americanized AI seems to suffer most is in its marketing fully eclipsing its R&D. People seem to have forgotten how DeepSeek spiked the football on OpenAI less than a year ago by making some marginal optimizations to its algorithm.

      The field isn’t suffering from the hype cycle nearly so much as it suffers from malinvestment: huge efforts to make the platform marketable, huge efforts to shoehorn clumsy chatbots into every nook and cranny of the OS interface, and vanishingly little effort to optimize material consumption, to effectively process data, or to segregate AI content from the human data the models need to improve.

        • UnderpantsWeevil@lemmy.world

          Implicit costs refer to the opportunity costs associated with a firm’s resources, representing the income that could have been earned if those resources were employed in their next best alternative use.

          • Hackworth@sh.itjust.works

            I don’t see the relevance here. Inpainting saves artists from time-consuming and repetitive labor for (often) no additional cost. Many generative inpainting models will run locally, but they’re also just included with an Adobe sub.

  • skisnow@lemmy.ca

    The reason most web forum posters hate AI is that AI is ruining web forums by polluting them with inauthentic garbage. Don’t treat it like it’s some sort of irrational bandwagon.

    • Sl00k@programming.dev

      Would love an explanation of how I’m in the wrong for reducing my work week from 40 hours to 15 using AI.

      Existing in a predatory capitalistic system and putting the blame on those who utilize available tools to reduce the predatory nature of that system is insane.

        • Sl00k@programming.dev

          My employer is pushing AI usage; if the work is done, the work is done. This is the reality we’re supposed to be living in with AI. Just conforming to the current predatory system because “AI bad” actively harms more than it helps.

          • petrol_sniff_king@lemmy.blahaj.zone

            The current predatory system will raise the limit on the 40-hour work week if they’re allowed to. 60. 80. You might not even get a weekend. Unions fought for your weekend.

            AI does not fundamentally change this relationship. It is the same predatory system.

    • Valmond@lemmy.world

      So cancer cell detection is now bad, and those doing it should feel bad?

      The world isn’t black and white.

          • Swedneck@discuss.tchncs.de

            It’s not in the least confusing, lmao; you know damn well what they mean and are just acting confused as a “gotcha”.

            • Valmond@lemmy.world

              If the point is that both bad and good AI exist, then it’s a very poor take.

              It actually proves my point by showing that everything is not black and white (e.g. AI has lots of good uses, lots of medium uses, and also bad uses).

              You also tried to put words in my mouth instead of explaining what the metaphor was all about; that isn’t a very smart look.

      • kadaverin0@lemmy.dbzer0.com

        Don’t be obtuse, you walnut. I’m obviously not equating medical technology with 12-fingered anime girls and plagiarism.

    • absentbird@lemmy.world

      When people say this, they are usually talking about a very specific sort of generative LLM built with unsupervised learning.

      AI is a very broad field with great potential; the improvements in cancer screening alone could save millions of lives over the coming decades. At its core it’s just math, and the equations have been in use for almost as long as we’ve had computers. It’s no more good or bad than calculus or trigonometry.
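
      (To make the “it’s just math” point concrete, here’s a minimal sketch of gradient descent, the calculus-era update rule underneath most modern machine learning; all numbers are illustrative.)

      ```python
      # Minimal sketch: fit y = w * x to toy data with plain gradient descent.
      xs = [1.0, 2.0, 3.0, 4.0]
      ys = [2.1, 3.9, 6.2, 8.1]    # roughly y = 2x

      w = 0.0                      # start from a bad guess
      lr = 0.01                    # learning rate
      for _ in range(1000):
          # derivative of the mean squared error sum((w*x - y)^2) / n
          grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
          w -= lr * grad           # step downhill

      print(round(w, 2))           # ~2.0: ordinary calculus, no magic
      ```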

      • occultist8128@infosec.pub

        There’s no hope commenting like this; just get ready to be downvoted for no reason. People use the wrong terms and normalize it.

          • KombatWombat@lemmy.world

            Providing a counterexample to a claim is not whataboutism.

            Whataboutism involves derailing a conversation with an ad hominem to avoid addressing someone’s argument, like what you just did.

              • occultist8128@infosec.pub

                Yeah, go cry about it. People use AI to help themselves while you’re just being technophobic, shouting ‘AI is bad’ without even saying which AI you mean. And you’re doing it on Lemmy, a tiny techno-bubble. Lmao.

                • kadaverin0@lemmy.dbzer0.com

                  No one is crying here, aside from some salty bitch of a techno-fetishist acting like his hard-on for environmental destruction and making people dumber is something to be proud of.