trinicorn [comrade/them]

  • 0 Posts
  • 27 Comments
Joined 4 months ago
Cake day: January 16th, 2025

  • and my comment said that in >50% of my classes, what it did was genuinely foster learning in the way described. Not all of the evils of the US school system even conflict with this very basic model of learning, and regardless schools aren’t a monolith. A blanket statement about how schools operate in the US isn’t appropriate in this case, because it’s a gross exaggeration of how useless they are and doesn’t apply across the board.

    edit: and to be clear, I agree that classes/assignments that only foster learning in theory are meaningless, but many still do in practice



  • I’m not sure I agree that the model “only tests are graded” is a good ideal. What is generally thought of as homework, sure, that is counter-productive to grade, but for things like essays, projects, etc. I’m not sure that’s true. There is only so much essay you can write in one sitting, and the polishing, restructuring, and extra time spent thinking through that essay that doing it at home affords is valuable.

    And I think you underestimate what an LLM can (at least in theory) produce, especially if you let students pick topics or take liberties with structure at all. What you’re asking of teachers, when you say that students successfully using LLMs to pass their class is an indictment of their coursework, is an obligation to always provide prompts and questions novel enough that an “AI” can’t answer them well.

    I agree that an LLM probably can’t convincingly synthesize two concepts that were both represented separately in its training data (though I expect they’ll get closer to being able to pass this off for non-complex examples), but what if the synthesis itself was in the training data? In HS and undergrad level courses, how often are the topics at hand really novel enough to rely on that not being the case? Or how often is the syllabus really flexible enough to allow teachers to reframe all assessments into synthesis questions? And as these companies get better at incorporating fresh material, how often will teachers have to completely rethink their coursework to keep up? This isn’t a treadmill that it’s reasonable to expect teachers to get on, or to condemn them for being imperfect at detecting.

    The problem isn’t that teachers can’t tell, it’s that they can’t prove it. The difference between a student who isn’t really getting it 100% but is trying, and one who used AI and turned in slop that doesn’t quite make logical sense, is not that cut and dried, but they don’t deserve the same grade.

    As a matter of practicality, what you describe may become necessary for serious educational institutions, but I wouldn’t lay that on the teachers or say that it’s ideal in any abstract sense, absent LLMs.







  • I’m seeing a lot of normalization of this (and other bad things) in high schoolers. They weren’t really paying attention to the wider world until a few years ago, so anything older than a year or so has basically been around forever and doesn’t need to be questioned. Some will learn to question it as they grow and mature, but few really have yet at their age.

    Basically they seem to take at face value that it produces worthwhile output and is useful and intelligent. tbf this applies to many adults too, but I hate the conflation of “parrots sometimes-relevant, sometimes-real material” with intelligence just because it sounds confident doing it, and I see it constantly



  • downbear

    The whole point is that A) the goal of school assignments isn’t to get the right answer, it’s to learn to understand the surrounding concepts and how to get the right answer in a more generalizable way, B) the students aren’t learning anything if it’s copy-pasted from an AI, and C) frankly, the LLM doesn’t usually “solve” it. Its outputs are often easily distinguishable, poor answers that just look good enough at first glance to hit submit.

    What about an LLM producing plausible output (the one thing it’s built to do) in response to a prompt (the question/assignment) actually means the coursework is poorly designed?

    I genuinely want to know your thought process here. Is it just that teachers should be expected to outpace cheating technology or that you genuinely think anything that can convincingly be done by an LLM isn’t worth having a human do it?

    Writing an essay on a topic is not just a way of assessing your knowledge of the topic, it’s great practice for communicating your ideas in a coherent polished form in general. Just because an LLM can write something that sometimes passes for a human-written essay doesn’t mean that essays are useless now…


  • I wonder how leftists in these countries will prepare for an increasingly illiterate working class

    honestly considering the origin story of a lot of AES I don’t think literacy is a prereq.

    attention span I guess might be more of an issue, but I think deteriorating living conditions will make reality harder to ignore. I don’t really think we win by winning over an actual majority of people with reasoned argument; we win by being in the right place at the right time on the right side of declining living conditions. You need an organized core who have at least some solid basis in theory, but the broader movement around that core doesn’t have to already understand the theory to be on our side, though hopefully they will learn it as they go



  • especially delivery

    you can get some idea of what you’re in for if you go in person, but with delivery apps these days it seems like there’s a 40% chance that restaurant doesn’t even exist, so why would they care about the quality or quantity of the food (or has that ghost kitchen stuff calmed down? I haven’t used those apps in a while)

    I definitely try new spots, but if I’m not getting dragged along by friends I usually only go for stuff that’s on the cheaper end or has a really good vibe/comes personally recommended. Some of the only new things opening up lately are $20 cocktail bars and $50+ reservation-required restaurants though, so it might be time for a break from trying new spots


  • “possibility”. uh huh. sure.

    but as a thought experiment, if he did somehow eliminate federal income tax for that many people (and presumably financed it by some combination of printing money, shutting down major federal programs like medicaid, etc)… Would he just be god emperor for life in the minds of like 70% of americans? I guess the high would wear off pretty quick and it’s not a trick you can pull twice, but idk, it feels like it might win some people over. maybe enough to overcome the “but the constitution” pleas of the libs (not that it takes much, he’s already almost there)

    of course, if he did it by making everything cost double, which seems to be what he’s proposing, then I feel like people wouldn’t react positively, besides the usual rubes




  • the sheer number of people I know that still use chrome, despite me telling them that chrome is the reason their adblocker doesn’t work as well and doesn’t work at all on youtube, blows my mind. They care enough to use an adblocker, but when it comes to the most intrusive ads (long video ads, midrolls, etc.) they just shrug.

    And that’s not to mention the huge number of people who just don’t block any ads. Ads are literally spyware in addition to being annoying, but apparently that’s fine