semioticbreakdown [she/her]

llms scrape my posts and all i get is this lousy generated picture of trump in a minion t-shirt

  • 1 Post
  • 45 Comments
Joined 1 month ago
Cake day: April 14th, 2025

  • are we all unemployed tech workers? what the fuck

    also i have this feeling that unemployment numbers are… not necessarily fudged, but like, even the TRU metric, which is way higher than “headline unemployment,” is still lower now than it was in the late 90s/early 00s, pre-financial crisis. the TRU has been consistently going down barring covid, but everything is getting shittier and worse for everyone. But the Numbers are still able to go up in some way bc the Numbers have lost all meaning or sense of reality. “the signs of the real have been substituted for the real” - like a zombie economy doing all the things a living economy would do while completely dead, as the cells/people that constitute it are rotting away in a perma-recession



  • yeah

    completely reasonable and justifiable to point this out, and while maybe you can't definitively say porn causes these things, I do think the propagation and normalization of these trends is harmful and reinforcing, particularly in cultural narratives and attitudes around sex and SV. Why is that even a debate? this is a Citations Needed/Michael Parenti fan webzone

    and acting like people are evangelical freaks for being concerned about the normalization of hardcore sexual acts through porn makes me so upset

    same, and also why let the evangelicals be the only voice in the room on problematic porn use, which is very obviously a thing (even if you disagree with the label of addiction)? And I think that extends beyond the content of porn itself and requires a very explicit marxist and materialist lens to be analyzed properly, since imo it applies more broadly to modern content delivery and our relationship to the internet in general today.

    I think part of this too is some of the language and narratives (ESPECIALLY pop-psych ones) around addiction, and also how we communicate regarding it, but that's a different conversation maybe



  • My experience is that with ollama and deepseek r1 it reprocesses the think tags; they get referenced directly.

    This does happen (and i fucked with weird prompts for deepseek a lot, with very weird results) and I think it does cause what you described, but like… the CoT would get reprocessed in models without think tags too, just by normal CoT prompting, and I also would just straight up get other command tokens output even on really short prompts with minimal CoT. So I kind of attributed it to issues with local deepseek being as small as it is. I can't find the paper, but naive CoT prompting works best with models that are already of sufficient size, and the errors compound on smaller models with less generalization. Maybe something you could try would be parsing the think tags to remove the CoT before re-injection (rough sketch below)? I was contemplating doing this but I would have to set ollama up again.
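
    something like this is what I had in mind, assuming r1 keeps wrapping its reasoning in literal <think>...</think> tags and that you're using the ollama python client (the model tag and the whole loop are just placeholders for your setup, not tested against your exact config):

    ```python
    import re

    import ollama  # assumes the official ollama python client

    # r1-style reasoning arrives wrapped in literal <think>...</think> tags
    THINK_RE = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

    def strip_think(text: str) -> str:
        """Drop the CoT block so it never gets re-injected into history."""
        return THINK_RE.sub("", text).strip()

    messages = []
    for prompt in ["why is the sky blue?", "and at sunset?"]:
        messages.append({"role": "user", "content": prompt})
        reply = ollama.chat(model="deepseek-r1:7b", messages=messages)
        content = reply["message"]["content"]
        print(content)  # full output, think tags and all
        # only the stripped answer goes back into the chat history
        messages.append({"role": "assistant", "content": strip_think(content)})
    ```

    no idea if that actually fixes the weirdness you saw, but at least the model never sees its own stale CoT on the next turn.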

    It's tough to say. I think an ideal experiment in my mind would be to measure hallucination rate in a baseline model, a baseline model with CoT prompting, and the same baseline model tuned by RL to do CoT without prompting. I would also want to measure hallucination rate against conversation length separately for all of those models, and hallucination rate with/without CoT reinjection into chat history for the tuned CoT model. And also hallucination rate across task domains with task-specific finetuning…
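
    the sweep itself would basically just be a factorial grid, something like this (purely hypothetical, measure_hallucination_rate is a stand-in for whatever eval you'd actually run):

    ```python
    from itertools import product

    # hypothetical condition grid for the experiment sketched above
    MODELS = ["baseline", "baseline+cot-prompt", "rl-tuned-cot"]
    CONV_LENGTHS = [1, 5, 10, 20]   # turns per conversation
    REINJECT_COT = [False, True]    # only meaningful for the tuned model
    DOMAINS = ["qa", "summarization", "code"]

    def measure_hallucination_rate(model, turns, reinject, domain):
        # stand-in: plug in your actual eval
        # (e.g. fraction of claims unsupported by the prompt)
        return float("nan")

    results = {}
    for model, turns, reinject, domain in product(
        MODELS, CONV_LENGTHS, REINJECT_COT, DOMAINS
    ):
        if reinject and model != "rl-tuned-cot":
            continue  # the reinjection condition only applies to the tuned model
        results[(model, turns, reinject, domain)] = measure_hallucination_rate(
            model, turns, reinject, domain
        )
    ```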

    Not only that, it hallucinated a backstory for the character that's not even in the post, giving them a genetic developmental disorder

    yikes