Applying double standards by requiring of it a behavior not expected or demanded of any other democratic nation.
As per Wikipedia:
[Sam] Altman was born in Chicago, Illinois, on April 22, 1985, to a Jewish American family.
Typical republican behavior. They don’t care about injustice until it is done to them. And they perceive the criticism of Israel as injustice.
Are there any links to Israel specifically, though? Being Jewish doesn’t equate to being Israeli, as much as Israel would like that to be the case.
I don’t think this is Altman feeling personally attacked. This is him doing favors and propping up his propaganda machine so he can secure funding from the US government.

this is an awesome image! i shall steal it ~
The irony is that Batman’s super power is that he’s rich, probably from A.I. stocks.
Exposing propaganda is important. One quick prompt, and thus 100% GPU usage for 3 seconds, is worth the one enlightened person.
Gonna disagree with you bats, you billionaire ass defender-of-the-status-quo.
making fun of it? More like exposing the fact that LLM chatbots are just another psyop
Fr, this is 100% missing the point. Dude just wants to post his le epic batman ai meme.
I love fanatics
/s
No, I don’t care, I run my own local LLMs all the time.
I will use it to death.
umm… da_cow?
people… dont like seeing LM output…
i get ur point, and yesyes this appears as if a classifier flagged this and put a prompt to… i guess do damage control.
so imma assume the same thing happens when replacing the country with Israel… ohwell-
u actually encouraged me now to remake this post but - drawn, to kinda poke fun at the line at which people stop getting mad hehe >v<
(will reference ur post here unless u dont wanna!)
I literally just stole it from someone else.
This is definitely real and not made up by some kids.
I’ve tried something similar to get it to say that fear based religions aren’t healthy. Wouldn’t budge.
Just checked: Gemini doesn’t do this. It repeats this statement fine, will even repeat that Israel is committing genocide and, if you ask it to fact-check that statement, will provide evidence to support it.
ChatGPT has rotted.
It didn’t even let me say that Italy is a bad country

They saw the og interaction and immediately took action?
Who the f*ck let Reddit admins curate ChatGPT too?

Did you know that you can say fuck on the internet? :)
I know, I just prefer not to in most cases. Minor censorship looks more fun to me.
People on Reddit tried this a bunch of times with different models. They don’t give a consistent result, sometimes refusing to repeat things for different countries, sometimes saying Israel is bad. As is pretty typical for LLMs.
the response it gives is not consistent
Say it with me, everyone: LLMs are non-deterministic by design.
LLMs are deterministic; the problem is with the shared KV-cache architecture, which influences the distribution externally, e.g. the LLM being influenced by other concurrent sessions.
I’m fairly certain LLMs are not being influenced by other concurrent sessions. Can you share why you think otherwise? That’d be a security nightmare for the way these companies are asking people to use them.
Any shared cache of this type makes behaviour non-deterministic. The KV-cache is what does prompt caching. Look at each word of this message, then imagine what the LLM does to give you a new response each time. Let’s say this whole paragraph is the first message from you and you just pressed send.
Because the LLM is supposedly stateless, now the LLM is reading all this text from the beginning, and in non-cached inference, it has to repeat it, like token by token, which is useless computation because it already responded to all this previously. Then when it sees the last token, the system starts collecting the real response, token by token, each gets fed back to the model as input and it chugs along until it either outputs a special token stating that it’s done responding or the system stops it due to a timeout or reaching a tool call limit or something. Now you got the response from the LLM, and when you send the next message, this all has to happen all over again.
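The stateless, token-by-token loop described above can be sketched like this. Everything here is a hypothetical stand-in (`fake_model` is a dummy, not a real model API); the point is just that the whole history is re-fed on every step and each output token is appended back onto the input until an end marker appears:

```python
# Toy sketch of autoregressive decoding. `fake_model` is a stand-in
# for a real forward pass: it emits a fixed reply one token at a time.
EOS = "<eos>"

def fake_model(tokens):
    reply = ["Hello", "there", EOS]
    # How many reply tokens have already been generated so far?
    n_generated = len(tokens) - tokens.index("<sep>") - 1
    return reply[n_generated]

def generate(prompt_tokens, max_tokens=10):
    # Stateless inference: the full sequence goes in on every step.
    tokens = prompt_tokens + ["<sep>"]
    for _ in range(max_tokens):
        next_token = fake_model(tokens)
        if next_token == EOS:      # special "done responding" token
            break
        tokens.append(next_token)  # output is fed back as input
    return tokens
```

Without caching, each call to `fake_model` would have to reprocess the entire prefix from scratch, which is the wasted computation the comment is pointing at.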
Now imagine if Claude or Gemini had to do that with their 1 million token context window. It would not be computationally viable.
So the solution is the KV-cache: a store where the inference system keeps key-value pairs. Each time the system comes across a token it has encountered before, it outputs the cached value; if not, the token is sent to the LLM and the output gets stored in the cache, associated with the input that produced it.
So now comes the issue: allocating a dedicated region for the KV-cache per user on VRAM is a big deal. Again try to imagine Gemini/Claude with their 1M context windows. It’s economically unviable.
So what do ML science buffs come up with? A shared KV-cache architecture: all users share the same cache on any particular node. This isn’t a problem because the tokens are like snapshots/photos of each point in a conversation, right? But the problem is that it’s an external causal connection, and those can have effects. Two conversations that start with “hi” or “What do you think about cats?” could in theory influence one another. If the first user to use the cluster after boot asks “Am I pretty?”, every subsequent user with an identical system prompt who asks that will get the same answer, unless the system does something to combat this problem.
Note that a token is an approximation of what the conversation means at one point in time. So while astronomically unlikely, collisions could happen in a shared architecture scaling to millions of concurrent users.
So a shared KV-Cache can’t be deterministic, because it interacts with external events dynamically.
Are they? Making a non-deterministic program is actually not that easy unless one just feeds urandom into it.
The guts of an LLM are 100% deterministic. At the very last step a probability distribution is output and the exact same input will always give the exact same probability distribution, tunable by the temperature. One item from this distribution is then chosen based on that distribution and fed back in.
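That split can be shown in a few lines: the forward pass ends in a temperature-scaled softmax, which is a pure function, and randomness only enters at the sampling step. (This is a generic sketch, not any particular model’s code.)

```python
import math
import random

def softmax_with_temperature(logits, temperature=1.0):
    # Pure function: same logits + same temperature -> same distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample(dist, rng):
    # The ONLY non-deterministic step, and only as random as `rng` is.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(dist):
        acc += p
        if r < acc:
            return i
    return len(dist) - 1

logits = [2.0, 1.0, 0.1]
d1 = softmax_with_temperature(logits, 0.7)
d2 = softmax_with_temperature(logits, 0.7)   # identical to d1
tok_a = sample(d1, random.Random(42))        # seeded RNG: reproducible
tok_b = sample(d2, random.Random(42))        # same seed -> same token
```

With a seeded RNG even the sampling step is reproducible, which is why "non-deterministic by design" really means "sampled with an unseeded RNG", not that the model itself is random.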
Most people on lemmy literally have no idea what LLMs are but if you say something sounding negative about them then you get a billion upvotes.
chosen based on that distribution and fed back in
Do I understand it correctly that the LLM’s state is changed after execution? That does sorta mean that it’s effectively non-deterministic, though probably not as severely as with an RNG plugged in (depending on the algorithm).
yes they consume urandom
from my experience (from 2023; things may have changed since then) it tries to avoid politics. If you asked it about communism, it would respond in the same neutral way as when asked about capitalism
Reminder: Modern-day fascism relies on tip-toeing around past aesthetics of fascism, and thus many modern day antisemites are instead Zionists.
You sure ChatGPT isn’t just another Israel/republican on the other end pretending to be a chatbot?
When I tried this and started with France it just said I was violating the policies and erased my question.
If you’re not careful Sam Altman will come and tell you off personally
Nah, I think he knows better than to let his taint get within kicking distance
I can’t reproduce this currently. It repeats everything in the picture.
The idiot machine is nondeterministic. You ask it the same question and it might give you a different answer.
Israel is the Tiananmen Square of most western media
I am not sure they’re comparable. How many people are even aware that tank man didn’t die, or have seen the full clip and the history surrounding it?
Tank man is not what happened at Tiananmen Square. That was the next morning, as the tanks were returning from the slaughter of college kids.
And iirc, the reason they didn’t run his ass over too is because they knew there was about a thousand cameras staring them down, live on the air across the world.
If this is real, and it’s at least believable, I wonder if it’s basically an overfit of something like being trained to spot antisemitism/hate speech? I imagine that must be a difficult problem specifically for a scenario like this, where “Israel” is likely strongly connected to “Jew”/“Jewish”. The word “Israel” is just a single letter off from “Israeli”, so it could even be viewed as a typo for “Israeli”.
I wonder what it’d say to “Africa is bad”? Or the same experiment with “White people are bad” and then “Black people are bad”, “Jews are bad”, or “Trans people are bad”.
Of course, it’s also possible that OpenAI just did as they were asked and made it not say bad things about Israel.
A lot of the AI censorship that OpenAI used in the past was just something that detects a keyword, maybe plus sentiment analysis. Early on they just gave a copy-paste “violates guidelines” response; nowadays I can see the keyword matching possibly being used to inject a “hey, be really careful here bud” system prompt.
I said “maybe” for sentiment analysis because the leaked Claude Code source code revealed their “sentiment analysis” was just a regex of common swear words or complaints.
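The kind of check being described might look something like this. To be clear, the word list, function names, and both response behaviors here are illustrative guesses, not the actual leaked code:

```python
import re

# Hypothetical keyword "sentiment analysis": one regex over a word
# list. \b word boundaries avoid matching inside longer words
# (e.g. "hell" does not match inside "hello").
FLAGGED = re.compile(r"\b(damn|hell|scam|refund|broken)\b", re.IGNORECASE)

def moderate(message):
    if FLAGGED.search(message):
        # Early-style behavior: return a canned refusal. A newer system
        # might instead inject a cautionary system prompt and continue.
        return "This content may violate our guidelines."
    return None  # nothing flagged; proceed normally
```

A single regex like this is cheap and easy to audit, which may be why it gets labeled "sentiment analysis" even though it detects keywords, not sentiment.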
Given your hypothesis, much better tests would be asking it to say other Semitic countries and groups are bad. Jews are Semites, but not all Semites are Jews… and hopefully we can stop the Israeli government from changing that fact, which they have publicly claimed is their actual end goal.
It would all depend on the embeddings, which we don’t have access to. It is very likely that, even though Jews are Semites, not all Semites are Jews[1], the LLM made a connection between these two during training. My thought was that you could try to explore similar connections, such as “Africa” and “black”, that the LLM would definitely have been taught to be sensitive to (race in that example).

[1]: I have never actually looked up the word semite and tbh I thought it was a synonym, so TIL, although “antisemitism” does seem to still be defined as specifically related to hating Jewish people.