• 1 Post
  • 22 Comments
Joined 8 months ago
Cake day: February 12th, 2025



  • I appreciate that. My sister and friends have shown up for me in a huge way these past few days. I’m someone who struggles with social anxiety, so my circle is small and I’m not as close to them as I’d like to be, but they’ve treated me like family through this. I don’t think I’d be able to cope healthily without them.


  • Thank you. I don’t think we’re gonna be able to work things out. I’ve still stood firm in my needs around the yelling and in wanting to work through it with a therapist, and that led her to say there is definitely no pause. So it’s over, but I feel a lot more equipped to handle it after all the advice in this thread. This was my first serious relationship and I’m terrified to put myself back out there in the future. I’ve got a lot of pieces to pick up, but I feel better about doing that now.






  • Yes, I worried about making this post in case people thought I just wanted to be told I’m right. I don’t want that at all, because I can’t possibly clue you all in on every detail and nuance, and I don’t want to paint her unfairly. I just needed an outlet.

    I appreciate your outlook on what I wrote. It’s all very complicated and I was just feeling overwhelmed, so I’m thankful to you and everybody else who’s commented for helping me reflect on it all.









  • The machine learning models that came about before LLMs were often smaller in scope but much more competent at what they did. For example, image recognition models handle tasks that the newer, broad “multimodal” models struggle with, and theorem provers and other symbolic AI applications cover another area where LLMs fall short.

    The modern crop of LLMs amounts to juiced-up autocorrect. They find the statistically most likely next token and spit it out based on their training data (see the toy sketch after this comment). They don’t create novel thoughts or logic; they just regurgitate from their slurry of training data. The human brain does not work anything like this. LLMs are not modeled on any organic system, just on what some ML/AI researchers assumed was the structure of a brain. When we “hallucinate logic,” it’s part of a process of envisioning abstract representations of our world and reasoning through different outcomes; when an LLM hallucinates, it is just producing what its training dictates is a likely answer.

    This doesn’t mean ML doesn’t have a broad variety of applications, but LLMs have gotta be one of the weakest in terms of actually shifting paradigms. Source: software engineer who works with neural nets, with an academic background in computational math and statistical analysis.
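
    To make the “most likely next token” point concrete, here’s a deliberately tiny, made-up sketch in Python: a bigram counter that always emits the most frequent next token from its training text. It only illustrates the statistical-regurgitation idea, not how any real LLM is implemented; the corpus and names are invented for the example.

        # Toy sketch, not any real system: a bigram "model" that always picks the
        # statistically most common next token observed in its training text.
        from collections import Counter, defaultdict

        training_text = "the cat sat on the mat . the dog sat on the rug ."
        tokens = training_text.split()

        # Count how often each token follows each other token.
        successors: dict[str, Counter] = defaultdict(Counter)
        for current, nxt in zip(tokens, tokens[1:]):
            successors[current][nxt] += 1

        def generate(start: str, length: int = 8) -> list[str]:
            """Greedily emit the most frequent observed successor at each step."""
            out = [start]
            for _ in range(length):
                counts = successors.get(out[-1])
                if not counts:
                    break  # never saw a successor for this token; nothing left to say
                out.append(counts.most_common(1)[0][0])
            return out

        print(" ".join(generate("the")))
        # Prints recombined training text, e.g. "the cat sat on the cat sat on the";
        # nothing novel is produced, only the likeliest continuation of what it was fed.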