Note: this lemmy post was originally titled “MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline” and linked to this article, which I cross-posted from this post in !fuck_ai@lemmy.world.

Someone pointed out that the “Science, Public Health Policy and the Law” website which published this click-bait summary of the MIT study is not a reputable publication deserving of traffic, so, 16 hours after posting it, I am editing this post (as well as the two other cross-posts I made of it) to link to MIT’s page about the study instead.

The actual paper is here and was previously posted on !fuck_ai@lemmy.world and other lemmy communities here.

Note that the study with its original title got far fewer upvotes than the click-bait summary did 🤡

  • DownToClown@lemmy.world

    The obvious AI-generated image and the generic name of the journal made me think there was something off about this website/article, and sure enough, the writer of this article is on X claiming that COVID-19 vaccines are not fit for humans and that there’s a clear link between vaccines and autism.

    Neat.

  • Wojwo@lemmy.ml

    Does this also explain what happens with middle and upper management? As people have moved up the ranks during the course of their careers, I swear they get dumber.

    • ALoafOfBread@lemmy.ml

      That was my first reaction. Using LLMs is a lot like being a manager. You have to describe goals/tasks and delegate them, while usually not doing any of the tasks yourself.

      • rebelsimile@sh.itjust.works

        Having stepped back from being a direct practitioner, I will say all my direct reports are “faster” in the programs we use at work than I am, but I’m still waaaaaaaaaay more efficient than all of them (their inefficiencies drive me crazy, actually). I’ve also taken up a lot of development work to keep my mind sharp. If I only had my team to manage and not my own personal projects, I could really see myself regressing a lot.

      • sheogorath@lemmy.world

        Fuck, this is why I’ve been feeling dumber since getting promoted to more senior positions, where I only have to work at the architectural level and on stuff that the more junior staff can’t handle.

        With LLMs basically my job is still the same.

    • vacuumflower@lemmy.sdf.org

      My dad around 1993 designed a cipher better than RC4 (I know that’s not a high mark now, but it kinda was then), which passed an audit by a relevant service.

      My dad around 2003 was still sharp; he’d explain interesting mathematical problems to me and my sister, and notice parallels to them, and other interesting things, in real life.

      My dad around 2005 was promoted to a management position and was already becoming kinda dumber.

      My dad around 2010 was a fucking idiot; you’d think he was mentally impaired.

      My dad around 2015 apparently went to a fortuneteller to “heal me from autism”.

      So yeah. I think it’s a bit similar to what happens to elderly people when they retire. Everything needs to be trained, and real tasks give you a feeling of life; giving orders and sitting through endless could-have-been-an-email meetings makes you both dumb and depressed.

    • TubularTittyFrog@lemmy.world

      That’s the Peter principle.

      People only get promoted until their inadequacies/incompetence show, and then their job becomes covering for it.

      Hence why so many middle managers’ primary job is managing the appearance of their own competence first and foremost, and they lose touch with the actual work being done… which is a key part of how you actually manage it.

      • Wojwo@lemmy.ml

        Yeah, that’s part of it. But there is something more fundamental, it’s not just rising up the ranks but also time spent in management. It feels like someone can get promoted to middle management and be good at the job initially, but then as the job is more about telling others what to do and filtering data up the corporate structure there’s a certain amount of brain rot that sets in.

        I had just attributed it to age, but this could also be a factor. I’m not sure it’s enough to warrant studies, but it’s interesting to me that just the act of managing work done by others could contribute to mental decline.

    • socphoenix@midwest.social

      I’d expect something similar, at least. When one doesn’t keep up to date on new information and lets their brain coast, it atrophies like any other muscle would from disuse.

  • QuadDamage@kbin.earth

    Microsoft reported the same findings earlier this year; spooky to see a more academic institution reach the same results. https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf Abstract, for those too lazy to click:

    The rise of Generative AI (GenAI) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using GenAI, and 2) when and why GenAI affects their effort to do so. Participants shared 936 first-hand examples of using GenAI in work tasks. Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.

  • Blackmist@feddit.uk

    Anyone who doubts this should ask their parents how many phone numbers they used to remember.

    In a few years there’ll be people who’ve forgotten how to have a conversation.

    • zqps@sh.itjust.works

      I don’t see how that’s any indicator of cognitive decline.

      Also people had notebooks for ages. The reason they remembered phone numbers wasn’t necessity, but that you had to manually dial them every time.

      • NateNate60@lemmy.world

        And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, [writing] will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.

        —a story told by Socrates, according to his student Plato

    • TubularTittyFrog@lemmy.world

      I already have seen a massive decline personally and observationally (watching other people) in conversation skills.

      Most people now talk to each other like they’re exchanging internet comments. They don’t ask questions, they don’t really engage… they just exchange declaratory sentences. Heck, on most of the dates I went on in the past few years there was zero real conversation, just vague exchanges of opinion and commentary. A couple of them went full-on streamer, just ranting at me and randomly stopping to ask me nonsense questions.

      Most of our new employees over the past year or two really struggle with any verbal communication, and if you approach them in person to talk about something they emailed about, they look massively uncomfortable and don’t really know how to think on their feet.

      Before the pandemic I used to actually converse with people and learn from them. Now everyone I meet feels like interacting with a highlight reel. What I don’t understand is why people are choosing this and then complaining about it.

    • starman2112@sh.itjust.works

      The other day I saw someone ask ChatGPT how long it would take to perform 1.5 million instances of a given task, if each instance took one minute. Mfs cannot even divide 1.5 million minutes by 60 to get 25,000 hours, then by 24 to get 1,041 days. Pretty soon these people will be incapable of writing a full sentence without ChatGPT’s input.

      Edit to add: divide by 365.25 to get 2.85 years. Anyone who can tell me how many months that is without asking an LLM gets a free cookie emoji
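The division chain above can be sanity-checked in a few lines of plain Python; this just redoes the comment’s arithmetic, nothing more:

```python
# Convert 1.5 million one-minute tasks into larger units, step by step.
total_minutes = 1_500_000

hours = total_minutes / 60   # 25,000 hours
days = hours / 24            # ~1,041.7 days
years = days / 365.25        # ~2.85 years

print(f"{hours:,.0f} hours ≈ {days:,.1f} days ≈ {years:.2f} years")
```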

      • lennivelkant@discuss.tchncs.de

        Rough estimate using 30 days as average month would be ~35 months (1050 = 35×30). The average month is a tad longer than 30 days, but I don’t know exactly how much. Without a calculator, I’d guess the total result is closer to 34.5. Just using my own brain, this is as far as I get.

        Now, adding a calculator to my toolset, the average month is 365.2425 d / 12 m = 30.4377 d/m. The total result comes out to about 34.2, so I overestimated a little.

        Also, the total time is 1041.66…, which would be more correctly rounded to 1042, but that has negligible impact on the result.

        Edit: I saw someone else went even harder on this, but for early morning performance, I’m satisfied with my work

        • starman2112@sh.itjust.works

          🍪

          Pirat gave me an egg emoji, so I baked some more cupcake emojis. Have one for getting it so close without even using a calculator 🧁

      • pirat@lemmy.world

        I want a free cookie emoji!

        I didn’t ask an LLM, no, I asked Wikipedia:

        The mean month-length in the Gregorian calendar is 30.436875 days.

        Edit: but since I already knew a year is 365.2425 I could, of course, have divided that by the 12 months of a year to get that number.

        So,

        1041 ÷ 30.436875 ≈ 34 months and…

        0.2019343313 × 30.436875 ≈ 6 days and…

        0.146249999987 × 24 ≈ 3 hours and…

        0.509999999688 × 60 ≈ 30 minutes and…

        0.59999998128 × 60 ≈ 35 seconds and…

        0.9999988768 × 1000 ≈ 999 milliseconds and

        0.9999988768 × 1000000 ≈ 999999 nanoseconds

        34 months + 6d 3h 30m 35s 999ms 999999 ns (or we could call it 36s…)

        Edit: 34 months is better known as 2 years and 10 months.
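The cascade above can be reproduced mechanically with `divmod`; this is only a sketch redoing the thread’s arithmetic, starting from the same truncated 1,041 days:

```python
# Break 1,041 days into months/days/hours/minutes/seconds using the
# Gregorian mean month length (365.2425 / 12 = 30.436875 days), as above.
MEAN_MONTH = 365.2425 / 12               # 30.436875 days

months, rem = divmod(1041, MEAN_MONTH)   # 34 months, ~6.146 days left over
days, rem = divmod(rem, 1)               # 6 days
hours, rem = divmod(rem * 24, 1)         # 3 hours
minutes, rem = divmod(rem * 60, 1)       # 30 minutes
seconds = rem * 60                       # ~36 seconds of rounding dust

print(f"{months:.0f} mo {days:.0f} d {hours:.0f} h {minutes:.0f} m {seconds:.1f} s")
```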

          • pirat@lemmy.world

            Thank you, you really didn’t have to. That cupcake is truly the icing and it’s almost too much! I’ll give you this giant egg of unknown origin: 🥚 in return, as long as you promise to use it for baking and making some more of those cupcakes for whoever else needs or deserves one within the next few days, hours, minutes, seconds, milliseconds and 999999 bananoseconds 🍌

    • Psythik@lemmy.world

      People don’t memorize phone numbers anymore? Why not? Dialing is so much quicker than searching your contacts for the right person.

      • UntitledQuitting@reddthat.com

        This is the furthest thing from my experience, lol. I can type 2 letters into my phone, see the right name, and press call. I haven’t memorised a phone number since before the year 2000* (*hyperbole)

  • surph_ninja@lemmy.world

    And using a calculator isn’t as engaging for your brain as manually working the problem. What’s your point?

    • UnderpantsWeevil@lemmy.world

      Seems like you’ve made the point succinctly.

      Don’t lean on a calculator if you want to develop your math skills. Don’t lean on an AI if you want to develop general cognition.

  • FreedomAdvocate@lemmy.net.au

    What a ridiculous study. People who got AI to write their essay can’t remember quotes from their AI-written essay? You don’t say?! Those same people also didn’t feel much pride over an essay they didn’t write? Hold the phone!!! Groundbreaking!!!

    Academics are a joke these days.

  • trashgarbage78@lemmy.dbzer0.com

    What should we do then? Just abandon LLM use entirely, or use it in moderation? I find it useful for trivial questions and sort of as a replacement for Wikipedia. Also, what should we do about the people who are developing this ‘rat poison’ and feeding it to young people’s brains?

    Edit: I also personally wouldn’t use AI at all if I didn’t have to compete with all these prompt engineers and their brainless speedy deployments.

    • UnderpantsWeevil@lemmy.world

      what should we do then?

      i also personally wouldn’t use AI at all if I didn’t have to compete with all these prompt engineers and their brainless speedy deployments

      Gotta argue that your more methodical and rigorous deployment strategy is more cost-efficient than guys cranking out bug-ridden releases.

      If your boss refuses to see it, you either go with the flow or look for a new job (or unionize).

      • paequ2@lemmy.today

        I’m not really worried about competing with the vibe coders. At least on my team, those guys tend to ship more bugs, which causes the fire alarm to go off later.

        I’d rather build a reputation of being a little slower, but more stable and higher quality. I want people to think, “Ah, nice. Paequ2 just merged his code. We’re saved.” instead of, “Shit. Paequ2 just merged. Please nothing break…”

        Also, those guys don’t really seem to be closing tickets faster than me. Typing words is just one small part of being a programmer.

    • orrk@lemmy.world

      Thing is, that “trivial question asking” is part of what causes this phenomenon.

    • TubularTittyFrog@lemmy.world

      You should stop using it and use Wikipedia.

      Being able to pull relevant information out of a larger body of it is an incredibly valuable life skill. You should not be replacing that skill with an AI chatbot.

    • GlenRambo@jlai.lu

      The abstract seems to suggest that in the long run you’ll outperform those prompt engineers.

  • Tracaine@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    arrow-down
    1
    ·
    3 months ago

    I don’t refute the findings but I would like to mention: without AI, I wasn’t going to be writing anything at all. I’d have let it go and dealt with the consequences. This way at least I’m doing something rather than nothing.

    I’m not advocating for academic dishonesty, of course; I’m only saying it doesn’t look like they bothered to examine the issue from the angle of:

    “What if the subject was planning on doing nothing at all, and the AI enabled them to expend the bare minimum of effort they otherwise would have avoided?”

    • renegadespork@lemmy.jelliefrontier.net

      I would argue that if you used AI, you still haven’t done any writing.

      I don’t think you can definitively say that you wouldn’t have done it anyway. That’s speculation based on a hypothetical situation.

      It’s possible you might have been moved to write if AI never existed, maybe not. But whatever you do write without AI is actually something you made, good or bad. LLM output isn’t.