• 7 Posts
  • 69 Comments
Joined 2 years ago
Cake day: July 19th, 2023

  • What a deeply dishonorable lawsuit. The complaint is essentially that Disney and Universal deserve to be big powerful movie studios that employ and systematically disenfranchise “millions of” artists (p8).

    Disney claims authorship over Darth Vader (Lucas) and Yoda (Oz), Elsa and Ariel (Andersen), folk characters Aladdin, Mulan, and Snow White; Lightning McQueen & Buzz Lightyear (Lasseter et al), Sully (Gerson & Stanton), Iron Man (Lee, Kirby, et al), and Homer Simpson (Groening). Not only did Disney not design or produce any of these characters; it purchased those rights. I will give Universal partial credit for not claiming to invent any of their infamous movie monsters, but they do claim to have created Shrek (Steig). Still, this is some original-character-do-not-steal snottiness; these avaricious executives and attorneys appropriated art from artists and are claiming it as their own so that they can sue another appropriator.

    Here is a sample of their attitude, p16 of the original complaint:

    Disney’s copyright registrations for the entertainment properties in The Simpsons franchise encompass the central characters within.

    See, they’re the original creator and designated beneficiary, because they have Piece of Paper, signed by Government Authority, and therefore they are Owner. Who the fuck are Matt Groening or Tracey Ullman?

    I will not contest Universal’s claim to Minions.

    One weakness of the claim is that it’s not clear whether Midjourney infringes, Midjourney’s subscribers infringe, or Midjourney infringes when collaborating with its subscribers. It seems like they’re going to argue that Midjourney commits the infringing act, although p104 contains hedges that will allow Disney to argue either way. Another weakness is the insistence that Midjourney could filter infringing queries but chooses not to; this is a standard way of amplifying damages in copyright claims, but it might not stand up under scrutiny, since Midjourney can argue that it’s hard to, say, tell an infringing query apart from a parodic or satirical one which technically infringes but is excused as fair use. On the other hand, this lawsuit could be an attempt to open a new front in Disney’s long-standing campaign to eradicate fair use.

    As usual, I’m not defending Midjourney, who I think stand on their own demerits. But I’m not ever going to suck Disney dick given what they’ve done to the animation community. I wish y’all would realize the folly of copyright already.



  • I’m gonna be polite, but your position is deeply sneerworthy; I don’t really respect folks who don’t read. The article has quite a few quotes from neuroscientist Anil Seth (not to be confused with AI booster Anil Dash) who says that consciousness can be explained via neuroscience as a sort of post-hoc rationalizing hallucination akin to the multiple-drafts model; his POV helps deflate the AI hype. Quote:

    There is a growing view among some thinkers that as AI becomes even more intelligent, the lights will suddenly turn on inside the machines and they will become conscious. Others, such as Prof Anil Seth who leads the Sussex University team, disagree, describing the view as “blindly optimistic and driven by human exceptionalism.” … “We associate consciousness with intelligence and language because they go together in humans. But just because they go together in us, it doesn’t mean they go together in general, for example in animals.”

    At the end of the article, another quote explains that Seth is broadly aligned with us about the dangers:

    In just a few years, we may well be living in a world populated by humanoid robots and deepfakes that seem conscious, according to Prof Seth. He worries that we won’t be able to resist believing that the AI has feelings and empathy, which could lead to new dangers. “It will mean that we trust these things more, share more data with them and be more open to persuasion.” But the greater risk from the illusion of consciousness is a “moral corrosion”, he says. “It will distort our moral priorities by making us devote more of our resources to caring for these systems at the expense of the real things in our lives” – meaning that we might have compassion for robots, but care less for other humans.

    A pseudoscience has an illusory object of study. For example, parapsychology studies non-existent energy fields outside the Standard Model, and criminology asserts that not only do minds exist but some minds are criminal and some are not. Robotics/cybernetics/artificial intelligence studies control loops and systems with feedback, which do actually exist; further, the study of robots directly leads to improved safety in workplaces where robots can crush employees, so it’s a useful science even if it turns out to be ill-founded. I think that your complaint would be better directed at specific AGI position papers published by techbros, but that would require reading. Still, I’ll try to salvage your position:

    Any field of study which presupposes that a mind is a discrete isolated event in spacetime is a pseudoscience. That is, fields oriented around neurology are scientific, but fields oriented around psychology are pseudoscientific. This position has no open evidence against it (because it’s definitional!) and aligns with the expectations of Seth and others. It is compatible with definitions of mind given by Dennett and Hofstadter. It immediately forecloses the possibility that a computer can think or feel like humans; at best, maybe a computer could slowly poorly emulate a connectome.




  • Your understanding is correct. It’s worth knowing that the matrix-multiplication exponent actually controls the running time of several other algorithms. I stubbed a little list a while ago; important examples include several graph-theory algorithms as well as parsing for context-free languages. There’s also a variant of P vs NP for this specific problem, because we can verify that a matrix is a product in (randomized) quadratic time, even though we don’t know how to compute the product that fast.
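
    For anyone curious what “verify in quadratic time” looks like concretely, here is a minimal sketch of Freivalds-style randomized verification (my own illustration, not anything from the stubbed list): instead of recomputing A·B, multiply both sides by a random vector, which only costs a few matrix-vector products.

    ```python
    import numpy as np

    def freivalds_check(A, B, C, rounds=20):
        """Illustrative sketch: probabilistically check whether A @ B == C.

        Each round costs three matrix-vector products, i.e. O(n^2),
        versus the super-quadratic cost of recomputing A @ B.
        A wrong C survives each round with probability at most 1/2.
        """
        n = C.shape[1]
        rng = np.random.default_rng()
        for _ in range(rounds):
            r = rng.integers(0, 2, size=(n, 1))      # random 0/1 column vector
            # A @ (B @ r) and C @ r are each O(n^2) to evaluate
            if not np.array_equal(A @ (B @ r), C @ r):
                return False                         # definitely not the product
        return True                                  # correct with probability >= 1 - 2**(-rounds)
    ```

    With integer matrices the comparison is exact; for floats you would swap in np.allclose.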

    That Reddit discussion contains mostly idiots, though. We expect an iterative sequence of ever-more-complicated algorithms with ever-slightly-better exponents, approaching quadratic time in the infinite limit. We also expect that a computer will be required to find those iterates at some point; personally, I think Strassen’s approach only barely fits inside a brain, and the larger approaches can’t be managed by humans alone.
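
    To make the “fits inside a brain” remark concrete, here is a bare-bones sketch of Strassen’s recursion (my own illustration, assuming square matrices whose size is a power of two): seven recursive multiplications instead of the naive eight, giving an exponent of log2(7) ≈ 2.807.

    ```python
    import numpy as np

    def strassen(A, B, cutoff=64):
        """Sketch of Strassen's algorithm for n-by-n matrices, n a power of two.

        Seven recursive multiplications instead of eight; below `cutoff`
        we fall back to ordinary multiplication, which wins on small blocks.
        """
        n = A.shape[0]
        if n <= cutoff:
            return A @ B
        k = n // 2
        A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
        B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]

        M1 = strassen(A11 + A22, B11 + B22, cutoff)
        M2 = strassen(A21 + A22, B11, cutoff)
        M3 = strassen(A11, B12 - B22, cutoff)
        M4 = strassen(A22, B21 - B11, cutoff)
        M5 = strassen(A11 + A12, B22, cutoff)
        M6 = strassen(A21 - A11, B11 + B12, cutoff)
        M7 = strassen(A12 - A22, B21 + B22, cutoff)

        return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                         [M2 + M4,           M1 - M2 + M3 + M6]])
    ```

    The ever-better exponents come from playing the same game with bigger block decompositions, which is exactly the part that stops fitting in a head.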



  • Read it to the end and then re-read 2009’s The Gervais Principle. I hope Ed eventually comes back to Rao’s rant because they complement each other perfectly; Zitron’s Business Idiot is Rao’s Clueless! What Rao brings to the table is an understanding that Sociopaths exist and steer the Clueless, and also that the ratio of (visible) Clueless to Sociopaths is an indication of the overall health of an (individual) business; Zitron’s argument is then that we are currently in an environment (the “Rot Economy” in his writing) which is characterized by mostly Clueless business leaders.

    Then re-read Doctorow’s 2022 rant Social Quitting, which introduced “enshittification”, an alternate understanding of Rao’s process. To Rao, a business pivots from Sociopath to Clueless leadership by mere dilution, but for Doctorow, there’s a directed market pressure which eliminates (or M&As) any businesses not willing to give up some Sociopathy in favor of the more generally-accepted Clueless principles. Concretely relevant to this audience, note how Sociopathic approaches to cryptocurrency-oriented banking have failed against Clueless GAAP accounting, not just at the regulatory level but at the level of handshakes between small-business CEOs.

    Somebody could start a new flavor of Marxism here, one which (to quote an old toot of mine @corbin@defcon.social that I can’t find) starts by understanding that management is a failed paradigm of production, and which quotes all of these various managers (Galloway, Rao, and Zitron were all management bros at one point, as were their heroes Scott Adams and Mike Judge) as having a modicum of insight cloaked in MBA-speak.



  • I’ve been giving professional advice about system administration directly to CEOs and CTOs of startups for over half a decade. They’ve all asked about AI one way or another. While some of my previous employers have had good reasons to use machine learning, none of the businesses I’ve worked with in the past half-decade have had any use for generative AI products, including startups whose entire existence was predicated on generative AI.

    Don’t sign up for a dick-measuring contest without measuring yourself first.






  • A lot of court documents are sealed or redacted, so I can’t quite get at all the details. Nonetheless here’s what I’ve got so far:

    • Chrome is just the browser, including Chromium, but not ChromiumOS (a Gentoo fork, basically) or ChromeOS (the branded OS on Chromebooks)
    • Chrome is unaffordable because it was quite expensive to build and continues to be a maintenance burden
    • The government is vaguely aware that forcing a sale of Chrome could be adverse for the market but the court hasn’t said anything on the topic yet
    • Via a filing from Apple, the court is aware that Firefox materially depends on Google, although it hasn’t done much beyond allowing Apple to file as amicus

    The court hasn’t cracked open AMD v Intel yet, where it was found that a cash remedy would be better than punishing the ongoing business concerns of a duopoly, but that would be one possible solution: instead of selling Chrome, Google would have to pay its competitors a lump sum and change its business practices somewhat.

    I am genuinely not sure what happens to “the browser market”, as it were. The Brave and Safari teams are relatively small because they make tweaks on top of an existing browser core; the extreme propagation of Electron suggests that once a browser is written, it does not need to be written again. The court may find browsers to be a sort of capital which is worth a lot of money on its own but not expensive to maintain. This would destroy Mozilla along with Google!



  • Today on the orange site, an AI bro is trying to reason through why people think he’s weird for not disclosing his politics to people he’s trying to be friendly with. Previously, he published a short guide on how to talk about politics, which — again, very weird, no possible explanation for this — nobody has adopted. Don’t worry, he’s well-read:

    So far I’ve only read Harry Potter and The Methods of Rationality, but can say it is an excellent place to start.

    The thread is mostly centered around one or two pearl-clutching conservatives who don’t want their beliefs examined:

    I find it astonishing that anyone would ask, [“who did you vote for?”] … In my social circle, anyway, the taboo on this question is very strong.

    To which the top reply is my choice sneer:

    In my friend group it’s clear as day: either you voted to kill and deport other people in the friend group or you didn’t. Pretty obvious the group would like to know if you’re secretly interested in their demise.



  • Yeah, as somebody in the USA, I think that both you and @gerikson@awful.systems are pearl-clutching over laboratory conditions while ignoring the other, more serious safety problems being addressed; the presenters were not exaggerating when they talked about the lifesaving impact of gender-affirming therapy. Last thread, you sheepishly admitted that part of the synthesis is complicated by criminalization and over-regulation; this thread, I’d like a sheepish admission that about a third of the USA (by population) suffers from restrictions on their reproductive rights.

    Like, yes, you shouldn’t distill your own high-proof alcohol at home, because you can go blind from methanol poisoning. But also, there was a time in the USA when high-proof alcohol was over-regulated, and it incentivized a lot of people to distill at home anyway.