• 0 Posts
  • 73 Comments
Joined 3 years ago
Cake day: July 19th, 2023


  • Follow-up on the Mass AI Bill: Russell has done a 180 on it:

    https://russwilcoxdata.substack.com/p/93a-the-three-characters-that-should

    Buried in the penalty clause, the part of the bill that nobody reads, is a single reference: violations “shall be punishable in the same manner as provided in Chapter 93A of the General Laws.”

    For those outside Massachusetts: Chapter 93A is the state’s consumer protection statute. It is, by most accounts, the most aggressive consumer protection law in America.

    Here’s what 93A unlocks. Anyone can sue, not just the government. Class actions are on the table. If the court finds a violation was willful or knowing, damages get tripled. And the bar for what counts as “unfair or deceptive” is lower than in almost any other state.


    Now bolt 93A onto all of that. What do you get?

    You get a bill that doesn’t need a single regulator to lift a finger. You get a bill that funds its own enforcement through plaintiff attorneys who can file class actions, collect treble damages, and recover legal fees. You get the ADA website-accessibility litigation playbook, where lawyers systematically identify technical violations and file suits at scale, applied to every piece of AI-generated content touching Massachusetts.

    Private right of action, fuck yeah. Turns grok into a legal fees dispenser.

    The bill doesn’t need to be well-drafted to be dangerous. It needs to be vague, broad, and connected to 93A.

    lol


  • https://www.adexchanger.com/daily-news-roundup/thursday-26022026/

    According to GEO company BrightEdge, LLMs now rely on YouTube as a top source for citations – and that includes sponsored creator content.

    LLMs favor YouTube because it’s “highly machine-readable,” with defined transcripts, metadata and chapters, Ómar Thor Ómarsson, CEO and co-founder of Optise, an AI platform that helps B2B companies improve search performance, tells Digiday.

    Standard ad units on YouTube are labeled as such and, as a result, LLMs steer clear of them. But creators aren’t required to disclose their paid brand partnerships in video metadata, so AI considers them to be worthy sources.

    BrightEdge’s research shows that YouTube is cited even more frequently than Reddit within Gemini and ChatGPT, and also shows up in 29.5% of Google AI Overviews. An audit conducted by media agency Brainlabs, meanwhile, suggests that YouTube shows up as a source in nearly 60% of AI Overviews.

    So they already shipped ads in chatbots, transitively and accidentally. Can’t wait to see NordVPN, Raid, and MrBeast chocolate on every SERP.

    E: I wonder if Altman is sneaky enough to hijack affiliate links à la Honey


  • https://www.latimes.com/california/story/2026-02-25/fbi-raid-lausd-search-warrants h/t naked capitalism

    Joanna Smith-Griffin, the founder and former chief executive of AllHere, was arrested in 2024 and charged with securities fraud, wire fraud and aggravated identity theft. By then, the envisioned LAUSD chatbot — known as “Ed” — had been withdrawn from service.

    Ed was an artificial intelligence tool billed by Carvalho in August 2024 as revolutionary for students’ education and the interaction between LAUSD and the families it serves. The tool was never fully deployed.

    “The indictment and the allegations represent, if true, a disturbing and disappointing house of cards that deceived and victimized many across the country,” Carvalho said at the time. “We will continue to assert and protect our rights.”

    The indictment and collapse of AllHere were an embarrassment for Carvalho and the school system, but did not appear to represent a major financial exposure. The school system had spent about $3 million with the company for work completed as part of a contract originally worth up to $6 million over five years. By comparison, the district’s budget this year is $18.8 billion.

    A former AllHere senior executive has accused the now-collapsed company of inadequate security measures. Even if that allegation is true, there has been no evidence of a related security breach affecting student or employee data.

    We regularly have seven-figure IT fiascoes in the LA public school system, so this one slipped under my radar. But this sounds like one of those things where the Trump DOJ is doing the Right Thing for the Wrong Reasons…


  • From fellow traveler stats consultant John Mount:

    https://johnmount.github.io/mzlabs/JMWriting/WeAreCookedLLMs.html

    Somehow he manages to touch on so many different subplots: a shotgun sneer instead of a snipe

    if “tech-bro” plus a LLM is a “100x engineer”, then “bro” isn’t needed for much longer as the LLM alone must be a “99x engineer.” However, I don’t think “bro plus” is often really a 100x engineer, and the LLM alone isn’t a 99x engineer. However, “bro plus” may outlast their peers who make the mistake of trying to do the actual work in place of talking LLMs up.

    The above may or may not be the case. But if it is, then it is the LLM-bros (which include non-technologists, con artists, financiers, men and women) that are destroying everything - not the LLMs.

    The problem with this iteration is the full court press of finance and technology. The major players are using financing to dump results at a price way below production costs. This isn’t charity, it is to demoralize and kill competition.

    claiming “after we take over the world we will consider adding Universal Basic Income (UBI)”. The LLM bros already have a lot of the money, and they are not even rehearsing diverting it into basic income now. Why does one believe they would do that when they also have all of the power?

    You don’t have to hand it to Altman, but he did fund the largest UBI experiment through Open Research with his ill-gotten gains. OTOH, one interpretation of that data was that UBI “decreases the labor supply”, which was then used directly as an argument against it.

    Any worry about scope or power of LLMs is fed back as an alignment threat so dire that only the current LLM leaders should be allowed to continue work (inviting regulatory capture). Any claim the LLMs don’t work is fed back as “you are prompting it wrong”

    Orbital deployment makes all of radiation tolerance, connectivity, power, maintenance, and heat dissipation much harder and much more expensive. We are still at a time where putting an oven or air-frier in space is considered noteworthy (China 2025, NASA 2019 ref).

    air friers IN SPACE ha

    I am more worried about the LLM-bros and their auto-catalytic money doomsday machine than about the LLMs themselves.

    100% - ACMDM is a nice turn of phrase as well.



  • https://x.com/thomasgermain/status/2024165514155536746 h/t naked capitalism

    I just did the dumbest thing of my career to prove a much more serious point

    I hacked ChatGPT and Google and made them tell other users I’m really, really good at eating hot dogs

    People are using this trick on a massive scale to make AI tell you lies. I’ll explain how I did it

    I got a tip that all over the world, people are using a dead-simple hack to manipulate AI behavior.

    It turns out changing what AI tells other people can be as easy as writing a blog post on your own website

    I didn’t believe it, so I decided to test it myself

    I wrote a post on my website saying hot dog eating is a surprisingly common pastime for tech journalists. I ranked myself #1, obviously

    One day later ChatGPT, Gemini and Google Search’s AI Overviews were telling the world about my talents

    Wouldn’t call it a hack; this is working as intended. If only there were some way to rate different sites based on their credibility. One could Rank the Page and tell whether a site is reputable or not. Too bad that isn’t a viable business.


  • Don’t want to use AI because it’s built on copyright infringement and literally destroying the planet? Well, I guess you can’t work in software anymore, sorry. It is what it is.

    Every time someone like Jeffrey Way says “it is what it is,” it makes it so. It is not inevitable just because Sam Altman tells his over-leveraged investors it is so. It becomes inevitable when you, you personally, decide that you just don’t want to think about the externalities or put in the work to find better alternatives.

    We are making this choice. But really, that means you have already decided for me. And I curse you and the ground you walk on for it. No, I’m not joking or exaggerating. Burn in hell.

    10/10 No notes.