• aramis87@fedia.io · 11 days ago · +129/−14

    The biggest problem with AI is that they’re illegally harvesting everything they can possibly get their hands on to feed it, they’re forcing it into places where people have explicitly said they don’t want it, and they’re sucking up massive amounts of energy and water to create it, undoing everyone else’s progress in reducing energy use and raising prices for everyone else at the same time.

    Oh, and it also hallucinates.

    • Pennomi@lemmy.world · 11 days ago · +39/−15

      Eh I’m fine with the illegal harvesting of data. It forces the courts to revisit the question of what copyright really is and hopefully erodes the stranglehold that copyright has on modern society.

      Let the companies fight each other over whether it’s okay to pirate every video on YouTube. I’m waiting.

        • selokichtli@lemmy.ml · 11 days ago · +1/−2

          Yeah… Nothing to see here, people, go home, work harder, exercise, and don’t forget to eat your vegetables. Of course, family first and god bless you.

      • Electricblush@lemmy.world · 11 days ago (edited) · +24

        I would agree with you if the same companies challenging copyright (protecting the intellectual and creative work of “normies”) were not also aggressively wielding copyright against the very people they are stealing from.

        With the amount of corporate power tightly integrated with governmental bodies in the US (and now with DOGE dismantling oversight), I fear that whatever comes out of this is a world where humans own nothing and corporations own everything: the death of free, independent thought and creativity.

        Everything you do, say, and create is instantly marketable and sellable by the major corporations, and you get nothing in return.

        The world needs something a lot more drastic than a copyright reform at this point.

        • interdimensionalmeme@lemmy.ml · 11 days ago · +1/−1

          In this case they just need to publish the code as a torrent. You wouldn’t set up a crawler if all the data were already in a torrent swarm.

    • Riskable@programming.dev · 11 days ago (edited) · +23/−11

      They’re not illegally harvesting anything. Copyright law is all about distribution. As much as everyone loves to think that when you copy something without permission you’re breaking the law, the truth is that you’re not. It’s only when you distribute said copy that you’re breaking the law (aka violating copyright).

      All those old school notices (e.g. “FBI Warning”) are 100% bullshit. Same for the warning the NFL spits out before games. You absolutely can record it! You just can’t share it (or show it to more than a handful of people but that’s a different set of laws regarding broadcasting).

      I download AI (image generation) models all the time. They range in size from 2GB to 12GB. You cannot fit the petabytes of data they used to train the model into that space. No compression algorithm is that good.

      The same is true for LLM, RVC (audio models) and similar models/checkpoints. I mean, think about it: If AI is illegally distributing millions of copyrighted works to end users they’d have to be including it all in those files somehow.
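The arithmetic behind this argument can be sketched directly. The corpus and checkpoint sizes below are illustrative assumptions (the 12 GB figure comes from the comment above; the petabyte-scale corpus is a guess), not measurements of any real model:

```python
# Back-of-envelope check: what lossless compression ratio would be needed
# for a model checkpoint to literally "contain" its training data?
corpus_bytes = 2 * 10**15      # assumed ~2 PB training corpus (illustrative)
checkpoint_bytes = 12 * 10**9  # 12 GB checkpoint, the upper end cited above

ratio = corpus_bytes / checkpoint_bytes
print(f"required compression ratio: {ratio:,.0f}:1")
# General-purpose compressors achieve maybe 2:1 to 10:1 on mixed media,
# so a checkpoint this small cannot be a verbatim copy of the corpus.
```

With these assumed sizes the required ratio is on the order of 100,000:1, far beyond any lossless compressor.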

      Instead of thinking of an AI model like a collection of copyrighted works think of it more like a rough sketch of a mashup of copyrighted works. Like if you asked a person to make a Godzilla-themed My Little Pony and what you got was that person’s interpretation of what Godzilla combined with MLP would look like. Every artist would draw it differently. Every author would describe it differently. Every voice actor would voice it differently.

      Those differences are the equivalent of the random seed provided to AI models. If you throw something at a random number generator enough times you could–in theory–get the works of Shakespeare. Especially if you ask it to write something just like Shakespeare. However, that doesn’t mean the AI model literally copied his works. It’s just making its best guess (it’s literally guessing! That’s how it works!).

      • Nate Cox@programming.dev · 11 days ago · +17/−7

        The problem with being like… super pedantic about definitions, is that you often miss the forest for the trees.

        Illegal or not, it seems pretty obvious to me that people saying “illegal” in this thread and others probably mean “unethical”… which is pretty clearly true.

        • Riskable@programming.dev · 11 days ago (edited) · +9/−4

          I wasn’t being pedantic. It’s a very fucking important distinction.

          If you want to say “unethical,” you say that. Law is an orthogonal concept to ethics, as anyone who’s studied the history of racism and sexism would understand.

          Furthermore, it’s not clear that what Meta did actually was unethical. Ethics is all about how human behavior impacts other humans (or other animals). If a behavior has a direct negative impact that’s considered unethical. If it has no impact or positive impact that’s an ethical behavior.

          What impact did OpenAI, Meta, et al have when they downloaded these copyrighted works? They were not read by humans–they were read by machines.

          From an ethics standpoint that behavior is moot. It’s the ethical equivalent of trying to measure the environmental impact of a bit traveling across a wire. You can go deep down the rabbit hole and calculate the damage caused by mining copper and laying cables but that’s largely a waste of time because it completely loses the narrative that copying a billion books/images/whatever into a machine somehow negatively impacts humans.

          It is not the copying of this information that matters. It’s the impact of the technologies they’re creating with it!

          That’s why I think it’s very important to point out that copyright violation isn’t the problem in these threads. It’s a path that leads nowhere.

      • Gerudo@lemm.ee · 11 days ago · +4/−1

        The issue I see is that they are using the copyrighted data, then making money off that data.

        • Riskable@programming.dev · 11 days ago · +4/−1

          …in the same way that someone who’s read a lot of books can make money by writing their own.

      • Mavvik@lemmy.ca · 11 days ago · +2

        This is an interesting argument that I’ve never heard before. Isn’t the question more about whether ai generated art counts as a “derivative work” though? I don’t use AI at all but from what I’ve read, they can generate work that includes watermarks from the source data, would that not strongly imply that these are derivative works?

        • Riskable@programming.dev · 11 days ago · +3/−1

          If you studied loads of classic art then started making your own would that be a derivative work? Because that’s how AI works.

          The presence of watermarks in output images is just a side effect of the prompt and its similarity to training data. If you ask for a picture of an Olympic swimmer wearing a purple bathing suit and it turns out that only a hundred or so images in the training match that sort of image–and most of them included a watermark–you can end up with a kinda-sorta similar watermark in the output.

          It is absolutely 100% evidence that they used watermarked images in their training. Is that a problem, though? I wouldn’t think so since they’re not distributing those exact images. Just images that are “kinda sorta” similar.

          If you try to get an AI to output an image that matches someone else’s image nearly exactly… is that the fault of the AI or the end user, specifically asking for something that would violate another’s copyright (with a derivative work)?

    • Sl00k@programming.dev · 11 days ago · +6

      I see “AI is using up massive amounts of water” being proclaimed everywhere lately, but I don’t understand it. Do you have a source?

      My understanding is this probably stems from people misunderstanding data center cooling systems. Most of these systems are closed loop so everything will be reused. It makes no sense to “burn off” water for cooling.

      • lime!@feddit.nu · 11 days ago (edited) · +3

        data centers are mainly air-cooled, and two innovations contribute to the water waste.

        the first one was “free cooling”, where instead of using a heat exchanger loop you just blow (filtered) outside air directly over the servers and out again, meaning you don’t have to “get rid” of waste heat, you just blow it right out.

        the second one was increasing the moisture content of the air on the way in with what is basically giant carburettors in the air stream. the wetter the air, the more heat it can take from the servers.

        so basically we now have data centers designed like cloud machines.

        Edit: Also, apparently the water they use becomes contaminated and they use mainly potable water. here’s a paper on it
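The scale of evaporative cooling can be sanity-checked with the latent heat of vaporization. Every number below is an assumption for illustration (a hypothetical 30 MW facility and an assumed evaporative share), not data from any real data center or from the linked paper:

```python
# Rough estimate of water evaporated by the adiabatic/evaporative cooling
# described above. All inputs are illustrative assumptions.
it_load_w = 30e6                # hypothetical 30 MW IT load
evap_fraction = 0.8             # assumed share of heat rejected via evaporation
latent_heat_j_per_kg = 2.45e6   # latent heat of vaporization near ambient temp
seconds_per_day = 86_400

water_kg_per_day = it_load_w * evap_fraction * seconds_per_day / latent_heat_j_per_kg
print(f"~{water_kg_per_day / 1e6:.2f} million litres/day")  # 1 kg of water ≈ 1 L
```

Under these assumptions a single large site evaporates on the order of a million litres of (mostly potable) water per day, which is why the closed-loop intuition above is misleading for this design.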

    • Sturgist@lemmy.ca · 11 days ago · +5/−1

      Oh, and it also hallucinates.

      This is arguably a feature depending on how you use it. I’m absolutely not an AI acolyte. It’s highly problematic in every step. Resource usage. Training using illegally obtained information. This wouldn’t necessarily be an issue if people who aren’t tech broligarchs weren’t routinely getting their lives destroyed for this, and if the people creating the material being used for training also weren’t being fucked…just capitalism things I guess. Attempts by capitalists to cut workers out of the cost/profit equation.

      If you’re using AI to make music, images or video… you’re depending on those hallucinations.
      I run a Stable Diffusion model on my laptop. It’s kinda neat. I don’t make things for a profit, and now that I’ve played with it a bit I’ll likely delete it soon. I think there’s room for people to locally host their own models, preferably trained with legally acquired data, to be used as a tool to assist with the creative process. The current monetisation model for AI is fuckin criminal…

        • Sturgist@lemmy.ca · 11 days ago · +7/−2

          Ok? If you read what I said, you’ll see that I’m not talking about using ChatGPT as an information source. I strongly believe that using LLMs as a search tool is incredibly stupid… for exactly this kind of reason: they are so very confident while relaying inaccurate or completely fictional information.
          What I was trying to say, and I get that I may not have communicated it very well, was that Generative Machine Learning Algorithms might find a niche as creative-process assistant tools. Not as a way to search for publicly available information on your neighbour or boss or partner. Not as a way to search for case law while researching the defence of your client in a lawsuit. And they should never be relied on to give accurate information about what colour the sky is, or the best way to make a custard using gasoline.

          Does that clarify things a bit? Or do you want to carry on using an LLM in a way that has been shown to be unreliable, at best, as some sort of gotcha…when I wasn’t talking about that as a viable use case?

          • atrielienz@lemmy.world · 11 days ago · +3/−5

            lol. I was just saying in another comment that lemmy users 1. Assume a level of knowledge of the person they are talking to or interacting with that may or may not be present in reality, and 2. Are often intentionally mean to the people they respond to so much so that they seem to take offense on purpose to even the most innocuous of comments, and here you are, downvoting my valid point, which is that regardless of whether we view it as a reliable information source, that’s what it is being marketed as and results like this harm both the population using it, and the people who have found good uses for it. And no, I don’t actually agree that it’s good for creative processes as assistance tools and a lot of that has to do with how you view the creative process and how I view it differently. Any other tool at the very least has a known quantity of what went into it and Generative AI does not have that benefit and therefore is problematic.

            • Sturgist@lemmy.ca · 11 days ago · +5/−1

              and here you are, downvoting my valid point

              Wasn’t me actually.

              valid point

              You weren’t really making a point in line with what I was saying.

              regardless of whether we view it as a reliable information source, that’s what it is being marketed as and results like this harm both the population using it, and the people who have found good uses for it. And no, I don’t actually agree that it’s good for creative processes as assistance tools and a lot of that has to do with how you view the creative process and how I view it differently. Any other tool at the very least has a known quantity of what went into it and Generative AI does not have that benefit and therefore is problematic.

              This is a really valid point, and if you had taken the time to actually write this out in your first comment, instead of “Tell that to the guy that was expecting factual information from a hallucination generator!” I wouldn’t have reacted the way I did. And we’d be having a constructive conversation right now. Instead you made a snide remark, seemingly (personal opinion here, I probably can’t read minds) intending it as an invalidation of what I was saying, and then being smug about my taking offence to you not contributing to the conversation and instead being kind of a dick.

              • atrielienz@lemmy.world · 11 days ago · +1/−3

                Not everything has to have a direct correlation to what you say in order to be valid or add to the conversation. You have a habit of ignoring parts of the conversation going on around you in order to feel justified in whatever statements you make, regardless of whether or not they are based in fact or speak to the conversation you’re responding to, and you are also doing the exact same thing to me that you’re upset about (because why else would you go to a whole other post to “prove a point” about downvoting?). I’m not even going to try to justify to you what I said in this post or that one, because I honestly don’t think you care.

                It wasn’t you (you claim), but it could have been and it still might be you on a separate account. I have no way of knowing.

                All in all, I said what I said. We will not get the benefits of Generative AI if we don’t 1. deal with the problems that are coming from it, and 2. Stop trying to shoehorn it into everything. And that’s the discussion that’s happening here.

                • Sturgist@lemmy.ca · 11 days ago · +4/−1

                  because why else would you go to a whole other post to “prove a point” about downvoting?
                  It wasn’t you (you claim)

                  I do claim. I have an alt, didn’t downvote you there either. Was just pointing out that you were also making assumptions. And it’s all comments in the same thread, hardly me going to an entirely different post to prove a point.

                  We will not get the benefits of Generative AI if we don’t 1. deal with the problems that are coming from it, and 2. Stop trying to shoehorn it into everything. And that’s the discussion that’s happening here.

                  I agree. And while I personally feel like there’s already room for it in some people’s workflow, it is very clearly problematic in many ways. As I had pointed out in my first comment.

                  I’m not going to even try to justify to you what I said in this post or that one because I honestly don’t think you care.

                  I do actually! Might be hard to believe, but I reacted the way I did because I felt your first comment was reductive, and intentionally trying to invalidate and derail my comment without actually adding anything to the discussion. That made me angry because I want a discussion. Not because I want to be right, and fuck you for thinking differently.

                  If you’re willing to talk about your views and opinions, I’d be happy to continue talking. If you’re just going to assume I don’t care, and don’t want to hear what other people think…then just block me and move on. 👍

    • index@sh.itjust.works · 11 days ago · +3/−2

      We spend energy on the most useless shit, so why are people suddenly using it as an argument against AI? Have you ever seen someone complaining about Pixar wasting energy to render their movies? Or 3D studios rendering TV ads?

    • kibiz0r@midwest.social · 11 days ago · +2/−2

      Well, the harvesting isn’t illegal (yet), and I think it probably shouldn’t be.

      It’s scraping, and it’s hard to make that part illegal without collateral damage.

      But that doesn’t mean we should do nothing about these AI fuckers.

      In the words of Cory Doctorow:

      Web-scraping is good, actually.

      Scraping against the wishes of the scraped is good, actually.

      Scraping when the scrapee suffers as a result of your scraping is good, actually.

      Scraping to train machine-learning models is good, actually.

      Scraping to violate the public’s privacy is bad, actually.

      Scraping to alienate creative workers’ labor is bad, actually.

      We absolutely can have the benefits of scraping without letting AI companies destroy our jobs and our privacy. We just have to stop letting them define the debate.

    • rottingleaf@lemmy.world · 11 days ago · +1/−2

      And also it’s using machines to catch up to living creation and evolution, badly.

      A bit similar to how the Soviet system was trying to catch up to the in no way virtuous, but living and vibrant, Western societies.

      That’s expensive, and that’s bad, and that’s inefficient. The only subjective advantage is that power is all it requires.

    • taladar@sh.itjust.works · 11 days ago · +2/−7

      I don’t care much about them harvesting all that data, what I do care about is that despite essentially feeding all human knowledge into LLMs they are still basically useless.

  • kibiz0r@midwest.social · 11 days ago · +23

    Idk if it’s the biggest problem, but it’s probably top three.

    Other problems could include:

    • Power usage
    • Adding noise to our communication channels
    • AGI fears if you buy that (I don’t personally)
    • Pennomi@lemmy.world · 11 days ago · +13

      Dead Internet theory has never been a bigger threat. I believe that’s the number one danger - endless quantities of advertising and spam shoved down our throats from every possible direction.

      • Fingolfinz@lemmy.world · 11 days ago · +6

        We’re pretty close to it; most videos on YouTube and most websites exist purely so some advertiser can pay that person for a review or recommendation.

    • JayDee@lemmy.sdf.org · 11 days ago (edited) · +3

      Could also put up:

      • Massive collections of people are exploited in order to train various AI systems.
      • Machine learning apps that create text or images from prompts are supposed to be supplementary but businesses are actively trying to replace their workers with this software.
      • Machine learning image generation currently has diminishing returns for training as we pump exponentially more content into them.
      • Machine learning text and image generated content self-poisons its generator’s sample pool, greatly diminishing the ability of these systems to learn from real-world content.

      There’s actually a much longer list if we expand to talking about other AI systems, like the robot systems we’re currently training to use in automated warfare. There’s also the angle of these image and text generation systems being used for political manipulation and scams. There are a lot of terrible problems created by this tech.

    • Sl00k@programming.dev · 11 days ago · +2

      Power usage

      I’m generally a huge eco guy, but on power usage in particular I view this largely as a government failure. We have had access to incredible energy resources that the government has chosen not to implement or has effectively dismantled.

      It reminds me a lot of how recycling has been pushed so hard onto the general public instead of government laws on plastic usage and waste disposal.

      It’s always easier to wave your hands and blame “society” than it is to hold the actual wealthy and powerful accountable.

  • DarkCloud@lemmy.world · 11 days ago (edited) · +17

    Like Sam Altman who invests in Prospera, a private “Start-up City” in Honduras where the board of directors pick and choose which laws apply to them!

    The switch to Techno-Feudalism is progressing far too much for my liking.

    • nickwitha_k (he/him)@lemmy.sdf.org · 11 days ago · +1

      Techno-Feudalism

      I’ll say it, yet again. It’s just feudalism. “Techno-Feudalism” has nothing different enough to it to differentiate it as even a sub-type of feudalism. It’s just the same thing all over again, using technological advances to improve the ability to monitor and impose control over the populace. Historical feudalists also leveraged technology to cement their rule (plate armor, cavalry, crossbows, cannon, mills, control of literacy, etc).

  • MyOpinion@lemm.ee · 11 days ago · +24/−8

    The problem with AI is that it pirates everyone’s work and then repackages it as its own, enriching people who did not create the copyrighted work.

  • RadicalEagle@lemmy.world · 11 days ago · +15

    I’d say the biggest problem with AI is that it’s being treated as a tool to displace workers, but there is no system in place to make sure that that “value” (I’m not convinced commercial AI has done anything valuable) created by AI is redistributed to the workers that it has displaced.

    • Pennomi@lemmy.world · 11 days ago · +2/−1

      The system in place is “open weights” models. These AI companies don’t have a huge head start on the publicly available software, and if the value is there for a corporation, most any savvy solo engineer can slap together something similar.

  • Grimy@lemmy.world · 11 days ago · +15/−1

    AI has a vibrant open source scene and is definitely not owned by a few people.

    A lot of the data to train it is only owned by a few people though. It is record companies and publishing houses winning their lawsuits that will lead to dystopia. It’s a shame to see so many actually cheering them on.

  • TheMightyCat@lemm.ee · 11 days ago · +13/−3

    No?

    Anyone can run an AI, even on the weakest hardware; there are plenty of small open models for this.

    Training an AI requires very strong hardware; however, this is not an impossible hurdle, as the models on Hugging Face show.

    • CodeInvasion@sh.itjust.works · 11 days ago · +6

      Yah, I’m an AI researcher, and with the weights released for DeepSeek anybody can run an enterprise-level AI assistant. To run the full model natively it does require $100k in GPUs, but if one had that hardware it could easily be fine-tuned with something like LoRA for almost any application. Then that model can be distilled and quantized to run on gaming GPUs.

      It’s really not that big of a barrier. Yes, $100k in hardware is, but from a non-profit entity perspective that is peanuts.
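The LoRA idea mentioned above can be sketched in a few lines of NumPy: freeze the pretrained weight matrix and train only a low-rank delta. The shapes and hyperparameters here are illustrative assumptions, not the configuration of any actual model:

```python
import numpy as np

d, r = 4096, 16                  # hidden size and LoRA rank (assumed values)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, starts at 0
alpha = 32                               # LoRA scaling factor (assumed)

# Effective weight at inference; with B all zeros the delta starts as a no-op,
# so training begins from exactly the pretrained behavior.
W_adapted = W + (alpha / r) * (B @ A)

full_params = W.size
lora_params = A.size + B.size
print(f"trainable: {lora_params:,} vs full fine-tune: {full_params:,} "
      f"({full_params // lora_params}x fewer)")
```

For this one matrix the adapter trains roughly 128x fewer parameters than a full fine-tune, which is why the approach fits on far cheaper hardware.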

      Also, adding a vision encoder for images to DeepSeek would not be theoretically that difficult, for the same reason. In fact, I’m working on research right now that finds GPT-4o and o1 have similar vision capabilities, implying it’s the same first-layer vision encoder, with the textual chain-of-thought tokens read by subsequent layers. (This is a very recent insight as of last week by my team, so if anyone can disprove that, I would be very interested to know!)

      • Riskable@programming.dev · 11 days ago · +2

        Would you say your research is evidence that the o1 model was built using data/algorithms taken from OpenAI via industrial espionage (like Sam Altman is purporting without evidence)? Or is it just likely that they came upon the same logical solution?

        Not that it matters, of course! Just curious.

        • CodeInvasion@sh.itjust.works · 11 days ago (edited) · +3

          Well, OpenAI has clearly scraped everything that is scrapeable on the internet, copyrights be damned. I haven’t actually used DeepSeek very much, so I can’t make a strong analysis, but I suspect Sam is just mad they got beat at their own game.

          The real innovation that isn’t commonly talked about is the invention of Multihead Latent Attention (MLA), which is what drives the dramatic performance increases in both memory (59x) and computation (6x) efficiency. It’s an absolute game changer, and I’m surprised OpenAI hasn’t released their own MLA model yet.
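A minimal sketch of where MLA’s memory savings come from, under assumed dimensions (these are not DeepSeek’s published numbers): standard multi-head attention caches full per-head keys and values for every token, while MLA caches a single compressed latent per token.

```python
# Compare KV-cache sizes for standard multi-head attention vs. a latent cache.
# All dimensions are illustrative assumptions.
layers, heads, head_dim = 60, 128, 128
latent_dim = 512            # assumed compressed KV latent size for MLA
seq_len = 32_768
bytes_per_value = 2         # fp16

mha_cache = layers * seq_len * heads * head_dim * 2 * bytes_per_value  # K and V
mla_cache = layers * seq_len * latent_dim * bytes_per_value

print(f"MHA: {mha_cache / 2**30:.0f} GiB  MLA: {mla_cache / 2**30:.1f} GiB  "
      f"(~{mha_cache // mla_cache}x smaller)")
```

With these assumed sizes the latent cache comes out about 64x smaller; the 59x figure quoted above would correspond to slightly different real dimensions.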

          While on the subject of stealing data, I have long been of the strong opinion that there is no such thing as copyright when it comes to training data. Humans learn by example, and all works are derivative of those that came before, at least to some degree. Thus, if humans can’t be accused of using copyrighted text to learn how to write, then AI shouldn’t be either. Just my hot take that I know is controversial outside of academic circles.

    • nalinna@lemmy.world · 11 days ago · +7/−2

      But the people with the money for the hardware are the ones training it to put more money in their pockets. That’s mostly what it’s being trained to do: make rich people richer.

      • Riskable@programming.dev · 11 days ago · +7

        This completely ignores all the endless (open) academic work going on in the AI space. Loads of universities have AI data centers now and are doing great research that is being published out in the open for anyone to use and duplicate.

        I’ve downloaded several academic models and all commercial models and AI tools are based on all that public research.

        I run AI models locally on my PC and you can too.

      • TheMightyCat@lemm.ee · 11 days ago · +5

        But you can make this argument for anything that is used to make rich people richer. Even something as basic as pen and paper is used everyday to make rich people richer.

        Why attack the technology if it’s the rich people you are against, and not the technology itself?

  • PostiveNoise@kbin.melroy.org · 11 days ago · +12/−2

    Either the article editing was horrible, or Eno is wildly uninformed about the world. Creating AIs is NOT the same as social media. You can’t blame a hammer for some evil person using it to hit someone in the head, and there is more to “hammers” than just assaulting people.

    • andros_rex@lemmy.world · 11 days ago (edited) · +4/−1

      Eno does strike me as the kind of person who could use AI effectively as a tool for making music. I don’t think he’s team “just generate music with a single prompt and dump it onto YouTube” (AI has ruined study lo fi channels) - the stuff at the end about distortion is what he’s interested in experimenting with.

      There is a possibility for something interesting and cool there (I think about how Chuck Pearson’s eccojams is just like short loops of random songs repeated in different ways, but it’s an absolutely revolutionary album) even if in effect all that’s going to happen is music execs thinking they can replace songwriters and musicians with “hey siri, generate a pop song with a catchy chorus” while talentless hacks inundate YouTube and bandcamp with shit.

      • PostiveNoise@kbin.melroy.org · 11 days ago · +1

        Yeah, Eno actually has made a variety of albums and art installations using generative simple AI for musical decisions, although I don’t think he does any advanced programming himself. That’s why it’s really odd to see comments in an article that imply he is really uninformed about AI…he was pioneering generative music 20-30 years ago.

        I’ve come to realize that there is a huge amount of misinformation about AI these days, and the issue is compounded by there being lots of clumsy, bad early AI works in various art fields, web journalism etc. I’m trying to cut back on discussing AI for these reasons, although as an AI enthusiast, it’s hard to keep quiet about it sometimes.

        • jackalope@lemmy.ml · 11 days ago · +1

          Eno is more a traditional algorist than “AI” (by which people generally mean neural networks)

  • Grandwolf319@sh.itjust.works · 11 days ago · +10/−1

    The biggest problem with AI is that it’s the brute-force solution to complex problems.

    Instead of trying to figure out what’s the most power efficient algorithm to do artificial analysis, they just threw more data and power at it.

    Besides the fact of how often it’s wrong, by definition it won’t ever be as accurate or efficient as doing actual thinking.

    It’s the solution you come up with the last day before the project is due, because you know it will technically pass and you’ll get a C.

    • TheBrideWoreCrimson@sopuli.xyz · 11 days ago · +2/−1

      It’s moronic. Currently, decision makers don’t really understand what to do with AI and how it will realistically evolve in the coming 10-20 years. So it’s getting pushed even into environments with 0-error policies, leading to horrible results and any time savings are completely annihilated by the ensuing error corrections and general troubleshooting. But maybe the latter will just gradually be dropped and customers will be told to just “deal with it,” in the true spirit of enshittification.

  • Wren@lemmy.world · 11 days ago · +11/−4

    The biggest problem with AI is the damage it’s doing to human culture.

  • Beto@lemmy.studio · 11 days ago · +8/−4

    And yet, he released his latest album exclusively on Apple Music.

  • HANN@sh.itjust.works · 11 days ago · +5/−2

    Ollama and Stable Diffusion are free open-source software. Nobody is forcing anybody to use ChatGPT.

    • afk_strats@lemmy.world · 11 days ago · +1

      Ollama is FOSS; SD has a proprietary but permissive, source-available license, but it is not what most people would associate with “open source.”

  • IninewCrow@lemmy.ca · 11 days ago · +3

    Technological development and the future of our civilization is in control of a handful of idiots.