he/him | any

I wrangle code, play bad music, and write things. You might find some of it here.

  • 4 Posts
  • 128 Comments
Joined 2 years ago
Cake day: March 13th, 2024

  • I don’t see much to laugh at here myself. Hank may have been a massive fencesitter on AI, but I still think his reaction to Sora is completely goddamn justified. This shit is going to enable scams, misinformation and propaganda on a Biblical fucking scale, and undermine the credibility of video evidence for good measure.

    No, it’s absolutely justified and I agree with basically everything he says in the video (esp. the title, there is really no reason for technology like this to exist in the hands of the public, or anyone really, there are zero upsides to it). It’s just funny to me because the video is just so different from his usual calm stuff.

    But honestly, good for him and (hopefully) his community too.


  • After kinda fence-sitting on the topic of AI in general for a while, Hank Green is having a mental breakdown on YouTube over Sora 2 and it’s honestly pretty funny.

    If you’re the kind of motherfucker who will create SlopTok, you are not the kind of motherfucker who should be in charge of OpenAI.

    Not that anyone should be in charge of that shitshow of a company, but hey!

    Bonus sneer from the comment section:

    Sam Altman in Feb 2015: “Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”

    Sam Altman in Dec 2015, after co-founding OpenAI: “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

    Sam Altman 4 days ago, on his personal blog: “we are going to have to somehow make money for video generation.”




  • That was one wild read, even worse than I was expecting. Holy sexism, Batman, the incel-to-tech pipeline is real.

    “In college, you don’t learn the building skills that you need for a startup,” Tan says of his decision. “You’re learning computer science theory and stuff like that. It’s just not as helpful if you want to go into the workforce.”

    I remember when a large part of the university experience was about meeting people, experiencing freedom from home for the first time before being forced into the 9-to-5 world, and broadening your horizons in general. But maybe that’s just the European perspective.

    In any case, these people are so fucking startup-brained that it hurts to think about.

    Now 25, Guild dropped out of high school in the 10th grade to continue building a Minecraft server he says generated hundreds of thousands of dollars in profit.

    Serious question: how? Isn’t the Minecraft server software free, and can’t you just host it yourself on your own computer? I tried to search up “how to make money off a Minecraft server” and was (of course) met with an endless list of LLM-slop results I could not bear to read more than one paragraph of.

    Amid political upheaval and global conflict, Palantir applicants are questioning whether college still serves the democratic values it claims to champion, York says. “The success of Western civilization,” she argues, “does not seem to be what our educational institutions are tuned towards right now.”

    Yes, because Palantir is such a beacon of democratic values and not a techfash shithouse at all.


  • I sometimes feel like I get thrown in with these people, as someone who also likes retro computing and even deliberately uses old software because it feels familiar and cozy to me, and because it’s often easier to hack and tweak (in the same way someone might prefer a vintage car they can maintain themselves, I guess) – and yes, I also find it super hard to put a finger on it.

    I also feel they’re very prominent in the Vim community for the exact same reasons you mentioned. I like Vim, I use it daily and it’s my favorite editor because it’s what I am used to and I know how to tweak it, and I can’t be bothered to use anything else (except Emacs, but only with evil-mode), but fuck me if Vim evangelists aren’t some of the most obnoxious people online.




  • Klarna is one company that boggles my mind. Here in Germany it’s against literally every bank’s TOS to hand out your login data to other people, they can (and do) terminate your account for that. And yet Klarna works by asking for your login data, including a fucking transaction token, to do their thing.

    You literally type your bank login data, including an MFA token, into a legalized phishing site so they can log into your account and make a transaction for you (see the toy sketch at the bottom of this comment for what that flow boils down to). And the banks are fine with it. I don’t get it.

    The German Supreme Court even deemed this whole shit unsafe all the way back in 2016 and said that websites aren’t allowed to offer Klarna as the only payment option because it’s an “unacceptable risk” for the customer, lol.

    Oh, and of course they also scan your account activity while they’re in there, because who’d pass up all that sweet data. We only know about that because they got slapped with a GDPR violation a few years back for not telling people about it.

    Yet for some reason it is super popular.
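
    To make the structure of that flow concrete, here’s a deliberately toy sketch in Python. Everything in it is made up (the fake bank, the provider class, the credentials); it is not Klarna’s actual implementation or any real banking API, just what “hand a third party your login and TAN so they can drive your banking session” boils down to:

    ```python
    # Hypothetical illustration only: a fake bank and a made-up provider class,
    # not Klarna's real implementation or any real banking API.
    from dataclasses import dataclass, field


    @dataclass
    class FakeBankAccount:
        owner: str
        balance_eur: float
        transactions: list = field(default_factory=list)


    class FakeBank:
        """Stand-in for a bank's online-banking frontend."""

        def __init__(self):
            self.accounts = {("alice", "hunter2"): FakeBankAccount("alice", 500.0)}

        def login(self, username: str, password: str) -> FakeBankAccount:
            # Whoever holds the credentials gets the whole session,
            # whether that's the customer or some third party.
            return self.accounts[(username, password)]

        def transfer(self, account: FakeBankAccount, amount: float, iban: str, tan: str) -> None:
            account.balance_eur -= amount
            account.transactions.append((amount, iban, tan))


    class ThirdPartyPaymentProvider:
        """The 'legalized phishing' part: the customer types bank credentials
        plus a one-time TAN into *this* party's form, not the bank's."""

        def __init__(self, bank: FakeBank, merchant_iban: str):
            self.bank = bank
            self.merchant_iban = merchant_iban

        def checkout(self, username: str, password: str, tan: str, amount: float) -> None:
            account = self.bank.login(username, password)  # full account access
            # Nothing technically stops the provider from reading the history
            # while it's in there (the data-scanning bit mentioned above).
            _ = list(account.transactions)
            self.bank.transfer(account, amount, self.merchant_iban, tan)


    if __name__ == "__main__":
        bank = FakeBank()
        provider = ThirdPartyPaymentProvider(bank, merchant_iban="DE00 0000 0000 0000 0000 00")
        # The customer hands their banking login and one-time token to the provider:
        provider.checkout("alice", "hunter2", tan="123456", amount=42.0)
        print(bank.accounts[("alice", "hunter2")].balance_eur)  # 458.0
    ```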


  • Our company is currently looking for a new programmer and we’ve interviewed a few candidates so far. I don’t want to generalize, but it really seems that a non-negligible part of the younger ones, at least, tries to use LLMs to make up for a lack of experience, and that really shows.

    I normally don’t like doing programming challenges during an interview because they have little to no real-world relevance, but I’ve been throwing small questions around lately just to see what people do and how they approach them, and there’s a subset of people who will say “I would ask ChatGPT now” in those scenarios.

    I haven’t met a vibe-coder in real life yet, but I’m afraid it’s only a matter of time.




  • Which AI models, though? Your synthetic text extruder LLMs that can’t reliably surpass humans at anything unless you train them specifically for it, and which are kinda shite even then unless you look at them exactly the right way? Or that fabled brain-simulation AI that doesn’t even exist?

    Instead, he prefers to describe future AI systems as a “country of geniuses in a data center,” […] [and] that such systems would need to be “smarter than a Nobel Prize winner across most relevant fields.”

    Ah, “future” AI systems. As in the ones we haven’t built yet, don’t know how to build, and don’t even know whether we can build them. But let’s just feed more shit into Habsburg GPT in the meantime, maybe one will magically pop out.