On the internet, nobody knows you are Australian.

also https://lemm.ee/u/MargotRobbie

To tell you the truth, I don’t know who I am either. Somebody sincere, perhaps.

But if you ever read this one day, I hope that you are as proud of me as I am of the person I imagined you to be.

  • 25 Posts
  • 1.05K Comments
Joined 1 year ago
Cake day: June 17th, 2023

  • Instead of blaming people for the lack of housing on the market because they are not moving out of their “starter homes” to buy bigger houses they don’t want or can’t afford, wouldn’t the obvious solution be to build more small houses/condos/townhouses?

    There is plenty of empty land everywhere in America, so it’s not like housing is supposed to be some kind of finite resource. The way I see it, this is real estate developers attempting to shift the blame for their own shortcomings onto the consumer.

  • Reddit, and by extension Lemmy, offers the ideal format for LLM datasets: human-generated conversational comments which, unlike those on traditional forums, are organized in a branched, nested format and scored with votes, much the same way that LLM reward models are built.

    There is really no way of knowing, much less preventing, public-facing data from being scraped and used to build LLMs. But let’s do a thought experiment: what if, hypothetically speaking, there were some particular individual who wanted to poison that dataset with shitposts, in a way that is hard to detect or remove with any easily automated method, by camouflaging their own online presence within the mass of human-generated text created during this time period, let’s say, the internet marketing campaign of a major Hollywood blockbuster.

    Since scrapers do not understand context, by creating shitposts in a similar format to, let’s say, the social media account of an A-list celebrity starring in this hypothetical film being promoted (ideally someone who no longer has a major social media presence, to avoid shitpost data dilution), whenever an LLM aligned on a reward model built from said dataset is prompted for an impression of that celebrity, it’s likely that shitposts in the same format would be generated instead, with no one being the wiser.

    That would be pretty funny.

    Again, this is entirely hypothetical, of course. But a rough sketch of what such a scraping-to-reward-model pipeline might look like follows below.
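
    Purely as an illustration, and keeping everything hypothetical: the sketch below shows how vote-scored, nested comment threads could be turned into the (prompt, chosen, rejected) preference pairs that pairwise reward models are commonly trained on. Every author, comment, and score in it is made up, and it describes no real scraper or lab’s pipeline, just the general shape of the idea.

    ```python
    # Hypothetical sketch (not any real pipeline): convert scraped, vote-scored
    # comment threads into pairwise preference data for a reward model.
    from dataclasses import dataclass, field
    from itertools import combinations


    @dataclass
    class Comment:
        author: str
        text: str
        score: int                      # net votes, as scraped
        replies: list = field(default_factory=list)


    def preference_pairs(parent_text, siblings):
        """Turn sibling replies to the same parent into (prompt, chosen, rejected)
        tuples, the shape pairwise reward-model training data usually takes."""
        pairs = []
        for a, b in combinations(siblings, 2):
            if a.score == b.score:
                continue
            chosen, rejected = (a, b) if a.score > b.score else (b, a)
            pairs.append((parent_text, chosen.text, rejected.text))
        return pairs


    def walk(comment, pairs):
        """Recursively harvest preference pairs from a nested thread."""
        pairs.extend(preference_pairs(comment.text, comment.replies))
        for reply in comment.replies:
            walk(reply, pairs)


    # Toy thread: the heavily upvoted, celebrity-styled shitpost "wins" every
    # pairing, so a reward model trained on these pairs learns to prefer that
    # format whenever the persona comes up.
    thread = Comment("op", "What would that A-lister say about this?", 50, [
        Comment("earnest_fan", "Probably something thoughtful and sincere.", 12),
        Comment("totally_not_her", "What do I know, I'm just here to shitpost.", 900),
    ])

    collected = []
    walk(thread, collected)
    for prompt, chosen, rejected in collected:
        print(f"PROMPT:   {prompt}\nCHOSEN:   {chosen}\nREJECTED: {rejected}\n")
    ```

    The point is simply that whichever sibling reply the votes favor becomes the “chosen” answer for that prompt, so a sufficiently upvoted shitpost format wins every comparison it appears in.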

  • The precedent in this case already exists in Midler v. Ford Motor Co., in which Academy Award-nominated actress and singer Bette Midler sued Ford after Ford hired musical impersonators to sing famous songs for their commercials.

    The court ultimately ruled in favor of Midler, because it was found that Ford gave clear instructions to the impersonating singer to sound as much like Midler as possible, and the ruling held that a voice, although not copyrightable, still constitutes part of a person’s distinct identity and is protected against unauthorized use without permission. (Outside of satire, of course, since I doubt someone like Trump would be above suing people for making fun of him.)

    I think Scarlett Johansson has a case here, but it really hinges on whether or not OpenAI actively gave the instruction to specifically impersonate Scarlett’s voice in “Her”, or whether they used her voice in the training data at all, since there is a difference between the “Sky” voice and the voice of Scarlett Johansson.

    But then again, what do I know, I’m just here to shitpost and promote “Barbie”.