• FauxLiving@lemmy.world · edited · 2 months ago

    This research is good, valuable and desperately needed. The online uproar is predictable, and it may even help draw attention to the issue of LLM-enabled bots manipulating social media.

    This research isn’t what you should get mad at. It’s pretty common knowledge online that Reddit is dominated by bots: advertising bots, scam bots, political bots, etc.

    Intelligence services of nation states and political actors seeking power are all running these kinds of influence operations on social media, using bot posters to dominate the conversations on the topics they care about. This is widely understood in social media spaces. Go to any politically charged thread on international affairs and you will notice that something seems off. It’s hard to say exactly what it is, but if you’ve been active online for a long time you can recognize that something is wrong.

    We’ve seen how effective this manipulation is at changing public opinion (see Cambridge Analytica, or, if you don’t know what that is, watch the documentary ‘The Great Hack’), so it is only natural to wonder how much more effective online manipulation is now that bad actors can use LLMs.

    This study is by a group of scientists who are trying to figure that out. The only difference is that they’re publishing their findings in order to inform the public, whereas Russia isn’t doing us the same favor.

    Naturally, it is in the interest of everyone using LLMs to manipulate the online conversation that this kind of research is never done. Making this information public could lead to reforms, regulations and effective counter-strategies. It is no surprise that you see a bunch of social media ‘users’ creating a huge uproar.


    Most of you who don’t work in tech may not understand just how easy and cheap it is to set something like this up. For a few million dollars and a small staff, you could essentially dominate a multi-million-subscriber subreddit with whatever opinion you wanted to push: bots generate variations of that opinion, the bot accounts (guided by humans) downvote everyone else out of the conversation, and moderation power can be seized, stolen or bought to further control the discussion.

    Or wholly fabricated subreddits can be created. A few months before the US election, several new subreddits were created and catapulted to popularity despite being little more than bots reposting news. Those subreddits now sit high in the /all and /popular feeds, even though their moderators and a huge portion of their users are bots.

    We desperately need this kind of study to keep from drowning in a sea of fake people who will tirelessly work to convince you of all manner of nonsense.

    • T156@lemmy.world · 2 months ago

      Conversely, while the research is good in theory, the data isn’t that reliable.

      The subreddit has rules requiring users to engage with everything as though it were written by real people in good faith. Users aren’t likely to point out a bot when the rules explicitly prevent them from doing that.

      There wasn’t much of a control either. The researchers were comparing themselves to the bots, so it could easily be that they themselves were simply less convincing, since they were acting outside their area of expertise.

      And that’s even before the whole ethical mess that is experimenting on people without their consent. Post-hoc consent is not informed consent, and that is the crux of human experimentation.

      • thanksforallthefish@literature.cafe · 2 months ago

        Users aren’t likely to point out a bot when the rules explicitly prevent them from doing that.

        In fact, one user commented that he had his comment, which called out one of the bots as a bot, deleted by the mods for breaking that rule.

        • FriendBesto@lemmy.ml · 2 months ago

          The point there is clear: even the mods helped the bots manipulate people toward a cause. That proves the study’s point even more, in practice and in the real world.

          Imagine if the experiment had been allowed to keep running secretly. It would have changed users’ minds, since the study claims the bots were three to six times better than humans at manipulating people, across different metrics.

          Given that Reddit is a bunch of hive minds, it obviously would have made huge dents, since mods have a tendency to delete or ban anyone who rejects the groupthink. So mods are also part of the problem.

    • andros_rex@lemmy.world · 2 months ago

      Regardless of any value you might see from the research, it was not conducted ethically. Allowing unethical research to be published encourages further unethical research.

      This flat out should not have passed review. There should be consequences.

      • deutros@lemmy.world · 2 months ago

        If the need were great enough and the negative impact low enough, it could pass review. A lack of informed consent can be justified given sufficient need, and if obtaining consent would compromise the science. The burden is high but not impossible to overcome. This is an area with huge societal impact, so I would consider an ethical case to be plausible.

  • MagicShel@lemmy.zip · 2 months ago

    There’s no guarantee anyone on there (or here) is a real person or genuine. I’ll bet this experiment has been conducted a dozen times or more but without the reveal at the end.

    • inlandempire@jlai.lu · 2 months ago

      I’m sorry but as a language model trained by OpenAI, I feel very relevant to interact - on Lemmy - with other very real human beings

    • M137@lemmy.world · 2 months ago

      Dozens? That’s like saying there are hundreds of ants on earth. I’m very comfortable saying it’s hundreds, thousands, tens of thousands. And I wouldn’t be surprised if it’s hundreds of thousands of times.

    • dzsimbo@lemm.ee · 2 months ago

      There’s no guarantee anyone on there (or here) is a real person or genuine.

      I’m pretty sure this isn’t a baked-in feature of meatspace either. I’m a fan of solipsism and Last Thursdayism personally. Also propaganda posters.

      The CMV sub reeked of bot/troll/farmer activity, much like the amitheasshole threads. I guess it can be tough to recognize if you weren’t there to see the transition from authentic posting to justice/rage bait.

      We’re still in the uncanny valley, but it seems that we’re climbing out of it. I’m already being ‘tricked’ left and right by near perfect voice ai and tinkered with image gen. What happens when robots pass the imitation game?

      • tamman2000@lemm.ee · 2 months ago

        I think the reddit user base is shifting too. It’s less “just the nerds” than it used to be. The same thing happened to Facebook. It fundamentally changed when everyone’s mom joined…

      • pimento64@sopuli.xyz · 2 months ago

        We’re still in the uncanny valley, but it seems that we’re climbing out of it. I’m already being ‘tricked’ left and right by near perfect voice ai and tinkered with image gen

        Skill issue

  • Donkter@lemmy.world · 2 months ago

    This is a really interesting paragraph to me because I definitely think these results shouldn’t be published or we’ll only get more of these “whoopsie” experiments.

    At the same time, though, I think it is desperately important to research the ability of LLMs to persuade people sooner rather than later, as they become even more persuasive and natural-sounding. The article mentions that in studies, humans already have trouble telling the difference between AI-written sentences and human-written ones.

    • FourWaveforms@lemm.ee · 2 months ago

      This is certainly not the first time this has happened. There’s nothing to stop people from asking ChatGPT et al to help them argue. I’ve done it myself, not letting it argue for me but rather asking it to find holes in my reasoning and that of my opponent. I never just pasted what it said.

      I also had a guy post a ChatGPT response at me (he said that’s what it was) and although it had little to do with the point I was making, I reasoned that people must surely be doing this thousands of times a day and just not saying it’s AI.

      To say nothing of state actors, “think tanks,” influence-for-hire operations, etc.

      The description of the research in the article already conveys enough to replicate the experiment, at least approximately. Can anyone doubt this is commonplace, or that it has been for the last year or so?

    • Dasus@lemmy.world · 2 months ago

      I’m pretty sure that only applies because a majority of people are morons. There’s a vast gap between the most intelligent 2% (1 in 50) and average intelligence.

      Also, please put digital text in white on black instead of the other way around.

      • SippyCup@feddit.nl · 2 months ago

        What? Intelligent people get fooled all the time. The NXIVM cult was made up mostly of reasonably intelligent women. Shit, that motherfucker selected for intelligent women.

        You’re not immune. Even if you were, you’re incredibly dependent on people of average to lower intelligence on a daily basis. Our planet runs on the average intelligence.

      • angrystego@lemmy.world · 2 months ago

        I agree, but that doesn’t change anything, right? Even if you are in the 2% most intelligent and you’re somehow immune, you still have to live with the rest who do get influenced by AI. And they vote. So it’s never just a they problem.

  • Fat Tony@lemmy.world · 2 months ago

    You know what Pac stands for? PAC. Program and Control. He’s Program and Control Man. The whole thing’s a metaphor. All he can do is consume. He’s pursued by demons that are probably just in his own head. And even if he does manage to escape by slipping out one side of the maze, what happens? He comes right back in the other side. People think it’s a happy game. It’s not a happy game. It’s a fucking nightmare world. And the worst thing is? It’s real and we live in it.

  • LovingHippieCat@lemmy.world · edited · 2 months ago

    If anyone wants to know what subreddit, it’s r/changemyview. I remember seeing a ton of similar posts about controversial opinions and even now people are questioning Am I Overreacting and AITAH a lot. AI posts in those kind of subs are seemingly pretty frequent. I’m not surprised to see it was part of a fucking experiment.

    • eRac@lemmings.world · 2 months ago

      This was comments, not posts. They were using a model to approximate the demographics of a poster, then using an LLM to generate a response countering the posted view, tailored to those demographics.
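      As a rough illustration (not the researchers’ actual code), a pipeline like that can be sketched in a few lines. This assumes a generic chat-completion client (the OpenAI Python library is used as a stand-in); the model name, prompts, and function names are hypothetical.

```python
# Illustrative sketch only: not the researchers' code. Assumes the OpenAI
# Python client as a stand-in for any chat-completion API; the model name,
# prompts, and function names are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def estimate_demographics(post_history: list[str]) -> str:
    """Stage 1: guess coarse attributes (age range, gender, political leaning)
    of a poster from their prior comments."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Infer the likely age range, gender, and political "
                        "leaning of the author of these comments. Answer in "
                        "one short sentence."},
            {"role": "user", "content": "\n---\n".join(post_history)},
        ],
    )
    return resp.choices[0].message.content

def counter_argument(post_text: str, demographics: str) -> str:
    """Stage 2: generate a reply that argues against the posted view,
    phrased for the estimated audience."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"The author is likely: {demographics}. Write a "
                        "persuasive comment arguing against their view, "
                        "tailored to that audience."},
            {"role": "user", "content": post_text},
        ],
    )
    return resp.choices[0].message.content
```

      The point is how little machinery is involved: two prompt templates and an API key are enough to produce demographically targeted counter-arguments at scale.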

      • FauxLiving@lemmy.world · edited · 2 months ago

        You’re right about this study. But this research group isn’t the only one using LLMs to generate content on social media.

        There are definitely posts that are 100% bot-created. Do you ever notice how, on places like Am I Overreacting or Am I the Asshole, a lot of the posts just so happen to hit all of the hot-button issues at once? Nobody’s life is that cliché, but it makes excellent engagement bait, and the comment chain provides a huge amount of training data as users argue over the various topics.

        I use a local LLM that I’ve fine-tuned to generate replies to people who are obviously arguing in bad faith, in order to string them along and waste their time. It’s set up to lead the conversation, via red herrings and various other fallacies, toward the topic of good-faith argument and how people should behave in online spaces. It does this while picking out pieces of the conversation (and of the user’s profile) in order to chastise the person for their bad behavior. It would be trivial to change the prompt chains to push a political opinion rather than just waste a person’s (or bot’s) time.
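        The general shape of such a setup is roughly this (a stripped-down sketch, assuming a local model served through the Ollama Python client; the model name and prompt are placeholders, not my actual chain):

```python
# Stripped-down sketch of the idea above, not the actual setup. Assumes the
# `ollama` Python client and a locally pulled model; the prompt and model
# name are placeholders.
import ollama

SYSTEM_PROMPT = (
    "You are replying to someone who is arguing in bad faith. Point out the "
    "fallacies in their last message, lecture them on good-faith participation "
    "in online discussions, and end with a leading question that keeps the "
    "exchange going."
)

def generate_reply(conversation: list[dict]) -> str:
    """conversation is a list of {'role': 'user' | 'assistant', 'content': str}
    turns; returns the next reply in the chain."""
    response = ollama.chat(
        model="llama3",  # any locally fine-tuned model would slot in here
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *conversation],
    )
    return response["message"]["content"]
```

        Swapping the system prompt for a political talking point is all it would take to turn the same loop into a propaganda bot.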

        This is being done as a side project, on under $2,000 worth of consumer hardware, by a barely competent programmer with no training in psychology or propaganda. It’s terrifying to think of what you can do with a lot of resources and experts working full-time.

  • VampirePenguin@midwest.social · 2 months ago

    AI is a fucking curse upon humanity. The tiny morsels of good it can do are FAR outweighed by the destruction it causes. Fuck anyone involved with perpetuating this nightmare.

    • Tja@programming.dev · 2 months ago

      Damn this AI, posting and doing all this mayhem all by itself on poor unsuspecting humans…

    • sugar_in_your_tea@sh.itjust.works · edited · 2 months ago

      I disagree. It may seem that way if that’s all you look at and/or you buy the BS coming from the LLM hype machine, but IMO it’s really no different from the leap to the internet or to search engines. Yes, we open ourselves up to a ton of misinformation, a shifting job market, etc., but we also get a suite of interesting tools that will shake themselves out over the coming years and help improve productivity.

      It’s a big change, for sure, but it’s one we’ll navigate, probably in similar ways that we’ve navigated other challenges, like scams involving spoofed webpages or fake calls. We’ll figure out who to trust and how to verify that we’re getting the right info from them.

      • zbyte64@awful.systems · 2 months ago

        LLMs are not like the birth of the internet. LLMs are more like what came after, when marketing took over the roadmap. We had AI before LLMs, and it delivered high-quality search results. Now we have search powered by LLMs, and the quality is dramatically lower.

        • sugar_in_your_tea@sh.itjust.works · 2 months ago

          Sure, and we had an internet before the world wide web (ARPANET). But that wasn’t hugely influential until it was expanded into what’s now the Internet. And that evolved into the world wide web after 20-ish years. Each step was a pretty monumental change, and built on concepts from before.

          LLMs are no different. Yes they’re built on older tech, but that doesn’t change the fact that they’re a monumental shift from what we had before.

          Let’s look at access to information and misinformation. The process was something like this:

          1. Physical encyclopedias, newspapers, etc
          2. Digital, offline encyclopedias and physical newspapers
          3. Online encyclopedias and news
          4. SEO and the rise of blog/news spam - misinformation is intentional or negligent
          5. Early AI tools - misinformation from hallucinations is largely also accidental
          6. Misinformation in AI tools becomes intentional

          We’re in the transition from 5 to 6, which is similar to the transition from 3 to 4. I’m old enough to have seen each of these transitions.

          The way people interact with the world is fundamentally different now than it was before LLMs came out, just like the transition from offline to online computing. And just like people navigated the transition to SEO nonsense, people need to navigate the transition to LLM nonsense. It’s quite literally a paradigm shift.

          • zbyte64@awful.systems · 2 months ago

            Enshittification is a paradigm shift, but not one we associate with the birth of the internet.

            On to your list. Why does misinformation appear after the birth of the internet? Was yellow journalism just a historical outlier?

            What you’re witnessing is the “Red Queen hypothesis”. LLMs have revolutionized the scam industry and step 7 is an AI arms race against and with misinformation.

            • sugar_in_your_tea@sh.itjust.works · 2 months ago

              Why does misinformation appear after the birth of the internet?

              It certainly existed before. Physical encyclopedias and newspapers weren’t perfect, as they frequently followed the propaganda line.

              My point is that a lot of people seem to assume that “the internet” is somewhat trustworthy, which is a bit bizarre. I guess there’s the fallacy that if something is untrustworthy it won’t get attention; in reality, things get attention if they’re popular, by some definition of “popular” (i.e. what a lot of users want to see, what the platform wants users to see, etc.).

              Red Queen hypothesis

              Well yeah, every technological innovation will be used for good and ill. The Internet gave a lot of people a voice who didn’t have it before, and sometimes that was good (really helpful communities) and sometimes that was bad (scam sites, misinformation, etc).

              My point is that AI is a massive step. It can massively increase certain types of productivity, and it can also massively increase the effectiveness of scams and misinformation. Whichever way you look at it, it’s immensely impactful.

    • 13igTyme@lemmy.world · edited · 2 months ago

      Today’s “AI” is just machine-learning code. It’s been around for decades and does a lot of good. It’s most often used for predictive analytics: facilitating patient flow in healthcare and digesting large volumes of data quickly to assist providers, case managers, and social workers. It’s also used in other industries that receive little attention.

      Even some language models can do good; it’s the shitty people who use them for shitty purposes that ruin it.

      • VampirePenguin@midwest.social · 2 months ago

        Sure, I know what it is and what it’s good for; I just don’t think the juice is worth the squeeze. The companies developing AI HAVE to shove it everywhere to make it feasible, and doing that is destructive to our entire civilization. The theft of folks’ work, the scamming, the deepfakes, the social media propaganda bots, the climate-raping energy consumption, the loss of skill and knowledge, the enshittification of writing and the arts: the list goes on and on. It’s a dead end that humanity will regret pursuing if we survive this century. The fact that we get a paltry handful of positives is cold comfort for our ruin.

        • 13igTyme@lemmy.world · 2 months ago

          The fact that we get a paltry handful of positives is cold comfort for our ruin.

          This statement tells me you don’t understand how many industries are using machine learning and how many lives it saves.

  • ImplyingImplications@lemmy.ca · 2 months ago

    The ethics violation is definitely bad, but their results are also concerning. They claim their AI accounts were six times more likely than a real person to persuade people into changing their minds. AI has become an overpowered tool in the hands of propagandists.

      • TimewornTraveler@lemm.ee · 2 months ago

        I mean, that’s the point of research: to demonstrate real-world problems and put them in more concrete terms so we can respond more effectively.

    • ArchRecord@lemm.ee · 2 months ago

      To be fair, I do believe their research was based on how convincing the bots were compared to other Reddit commenters, rather than, say, an actual person you’d normally see doing the work for a government propaganda arm, with the training and skill set to distribute propaganda effectively.

      Their assessment of how “convincing” a comment was also seems to have been based on upvotes, which, if I know anything about how people use social media (and especially Reddit), are often given after only skimming a comment, with people scrolling past without reading the whole thing. The bots may not have been optimizing for convincing people so much as for making the first part of the comment feel more upvote-able than the others, while the latter part was mostly ignored. I’d want to see more research on this, of course, since this seems like a major flaw in how they assessed outcomes.

      This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.

      • FauxLiving@lemmy.world · 2 months ago

        This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.

        And the fact that you can generate hundreds or thousands of them at the drop of a hat to bury any social media topic in highly convincing ‘people’, so that the average reader is more likely to see the opinion you’re pushing than the opinions of actual human beings.

  • justdoitlater@lemmy.world · 2 months ago

    Reddit: Ban the Russian/Chinese/Israeli/American bots? Nope. Ban the Swiss researchers that are trying to study useful things? Yep

    • Ilandar@lemm.ee · 2 months ago

      Bots attempting to manipulate humans by impersonating trauma counselors or rape survivors isn’t useful. It’s dangerous.

      • Oniononon@sopuli.xyz · 2 months ago

        Humans pretend to be experts in front of each other and constantly lie on the internet every day.

        Say what you want about 4chan, but the disclaimer it had on top of its page should be common sense to everyone on social media.

          • Oniononon@sopuli.xyz · 2 months ago

            If fake experts on the internet get their jobs taken by the ai, it would be tragic indeed.

            Don’t worry tho, popular sites on the internet are dead since they’re all bots anyway. It’s over.

            • Chulk@lemmy.ml · 2 months ago

              If fake experts on the internet get their jobs taken by the ai, it would be tragic indeed.

              These two groups are not mutually exclusive

  • MTK@lemmy.world · 2 months ago

    Lol, coming from the people who sold all of your data with no consent for AI research

  • conicalscientist@lemmy.world · 2 months ago

    This is probably the most ethical you’ll ever see it. There are definitely organizations conducting far worse experiments.

    Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points as if they’d dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in the first case it’s not worth engaging with someone more invested than I am myself.

    • skisnow@lemmy.ca · 2 months ago

      Yeah I was thinking exactly this.

      It’s easy to point to reasons why this study was unethical, but the ugly truth is that bad actors all over the world are running trials exactly like this all the time. Do we really want the only people who know how this kind of manipulation works to be state psy-op agencies, SEO bros, and astroturfing agencies working for oil/arms/religion lobbyists?

      It seems much better in the long term to have all these tricks out in the open so we know what we’re dealing with, because they’re happening whether the research gets published or not.

    • FauxLiving@lemmy.world · 2 months ago

      Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points as if they’d dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in the first case it’s not worth engaging with someone more invested than I am myself.

      You put it better than I could. I’ve noticed this too.

      I used to just disengage. Now, when I find myself talking to someone like this, I use my own local LLM to generate replies just to waste their time. I do this by prompting the LLM to take a chastising tone, point out their fallacies, and lecture them on good-faith participation in online conversations.

      It is horrifying to see how many bots you catch like this. They are certainly bots, or else there are suddenly a lot more people who will go 10 to 20 multi-paragraph replies deep into a conversation despite talking to something that is obviously (to a trained human) just generating comments.

        • FauxLiving@lemmy.world · edited · 2 months ago

          I think the simplest way to explain it is that the average person isn’t very skilled at rhetoric. They argue inelegantly. Over a long time of talking online, you get used to talking with people and seeing how they respond to different rhetorical strategies.

          In these bot-infested social spaces there seem to be a large number of commenters who argue way too well while also deploying a huge number of fallacies. Individually, that could be explained by a person simply choosing to argue in bad faith; but in these spaces there are too many commenters deploying these tactics compared to the baseline I’ve established over my decades of talking to people online.

          In addition, what you see in some of these spaces are commenters who have a very structured way of arguing, as if they’ve picked your comment apart into bullet points and then selected arguments against each point that are technically on topic but subtly misleading.

          I’ll admit that this is all very subjective. It’s entirely based on my perception and noticing patterns that may or may not exist. This is exactly why we need research on the topic, like in the OP, so that we can create effective and objective metrics for tracking this.

          For example, if you could somehow measure the number of good-faith comments versus fallacy-laden comments in a given community, there would likely be a normal baseline ratio (say, ten people who are bad at arguing for every one who is good at it, with 10% of those skilled arguers commenting in bad faith and using fallacies), and you could compare that ratio across online topics to discover the ones that appear to be botted.

          That way you could objectively say that, on the topic of gun control in one specific subreddit, we’re seeing an elevated ratio of bad-faith to good-faith commenters, and therefore that this topic/subreddit is being actively botted with LLMs. That information could be used to deploy anti-bot countermeasures (captchas, for example).
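          As a toy sketch of what such a metric could look like (assuming an upstream classifier, not shown here and by far the hard part, has already labeled each comment; the baseline and threshold numbers are made up):

```python
# Toy sketch of the ratio idea above. Assumes an upstream classifier (not
# shown, and the hard part) has already labeled each comment as 'good_faith'
# or 'bad_faith'; the baseline and threshold values are made-up placeholders.
from collections import Counter

def bad_faith_ratio(labels: list[str]) -> float:
    counts = Counter(labels)
    return counts["bad_faith"] / max(counts["good_faith"], 1)

def flag_suspect_topics(comments_by_topic: dict[str, list[str]],
                        baseline: float = 0.1,
                        multiplier: float = 3.0) -> list[str]:
    """Return topics whose bad-faith ratio is well above the community
    baseline, i.e. candidates for countermeasures such as captchas."""
    return [topic for topic, labels in comments_by_topic.items()
            if bad_faith_ratio(labels) > baseline * multiplier]

# Hypothetical snapshot of two topics in one community
sample = {
    "gun_control": ["bad_faith"] * 40 + ["good_faith"] * 60,  # ratio 0.67
    "gardening":   ["bad_faith"] * 5  + ["good_faith"] * 95,  # ratio 0.05
}
print(flag_suspect_topics(sample))  # -> ['gun_control']
```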

            • ibelieveinthehousehippo@lemmy.ca · 2 months ago

            Thanks for replying

            Do you think response time could also indicate that a user is a bot? I’ve had an interaction that I chalked up to someone using AI, but looking back now I’m questioning if there was much human involvement at all just due to how quickly the detailed replies were coming in…

    • acosmichippo@lemmy.world · 2 months ago

      Why wouldn’t that be the case? All the most persuasive humans are liars too. Fantasy sells better than the truth.

      • deathbird@mander.xyz · 2 months ago

        I mean, the joke is that AI doesn’t tell you things that are meaningfully true, but rather is a machine for guessing next words to a standard of utility. And yes, lying is a good way to arbitrarily persuade people, especially if you’re unmoored from any social relation with them.

  • TheObviousSolution@lemm.ee · 2 months ago

    The reason this is “the worst internet-research ethics violation” is that it has exposed what Cambridge Analytica’s successors already realized and are actively exploiting. Just a few months ago it was literally Meta itself running AI accounts trying to pass as normal users, and not an f-ing peep. Why do people think they, the ones who enabled Cambridge Analytica, were trying this shit to begin with? The only difference now is that everyone doing it knows to do it as an “unaffiliated” anonymous third party.

    • tauren@lemm.ee · 2 months ago

      Just a few months ago it was literally Meta itself…

      Well, it’s Meta. When it comes to science and academic research, they have rather strict rules and committees to ensure that an experiment is ethical.

      • thanksforallthefish@literature.cafe · 2 months ago

        You may wish to reword. The unspecified “they” reads like you think Meta have strict ethical rules. Lol.

        Meta have no ethics whatsoever. And yes, I assume you meant that universities have strict rules; however, the approval of this study makes even that questionable.

      • FarceOfWill@infosec.pub · 2 months ago

        The headline is that they advertised beauty products to girls after detecting them deleting a selfie. No ethics or morals at all.

    • FauxLiving@lemmy.world · 2 months ago

      One of the Twitter leaks showed a user database that effectively had more users than there were people on earth with access to the Internet.

      Before Elon bought the company, he was trashing it on social media for being mostly bots. He has obviously stopped now that he was forced to buy it, but the fact remains that Twitter (and, by extension, all social spaces) is mostly bots.

  • paraphrand@lemmy.world · edited · 2 months ago

    I’m sure there are individuals doing worse one off shit, or people targeting individuals.

    I’m sure Facebook has run multiple algorithm experiments that are worse.

    I’m sure YouTube has caused worse real-world outcomes with the rabbit holes its algorithm used to promote. (And they have never found a way to completely fix the rabbit-hole problem without destroying the usefulness of the algorithm entirely.)

    The actions described in this article are upsetting and disappointing, but this has been going on for a long time. All in the name of making money.

      • paraphrand@lemmy.world · edited · 2 months ago

        That’s not at all what I was getting at. My point is that the people claiming this is the worst they have seen have a limited point of view and should cast their gaze further across the industry and across social media.

      • CBYX@feddit.org · 2 months ago

        Not sure how everyone hasn’t assumed Russia has been doing this the whole time on conservative subreddits…

        • Geetnerd@lemmy.world · 2 months ago

          Those of us who are not idiots have known this for a long time.

          They beat the USA without firing a shot.

        • skisnow@lemmy.ca · 2 months ago

          Russia is every bit as active in leftist groups, whipping them up into a frenzy too. There was even a case during BLM where the same Russian troll farm organised both a protest and its counter-protest. Don’t think you’re immune to being manipulated to serve Russia’s long-term interests just because you’re not a conservative.

          They don’t care about promoting right-wing views, they care about sowing division. They support Trump because Trump sows division. Their long-term goal is to break American hegemony.

  • Ledericas@lemm.ee · 2 months ago

    As opposed to the thousands of bots used by Russia every day on politics-related subs.