Chinese EVs subsidized with prison labor and CCP funds to undercut the market and stifle long-term innovation, what a boon to humanity!
We are not in a recession. The problems with wage stagnation are not some temporary hiccup in the economy; they are systemic. Stop conflating the two and complaining that a macroeconomic term with a very specific meaning isn’t defined the way you want it to be. Stop expecting the problem to heal itself if the Fed lowers rates or taxes get nudged up or down or whatever. We know how to fix wage stagnation because we have done it before. Regulation. Labor protections. Minimum wage increases. Wage stagnation occurs in the absence of these things, and they can only be enacted by Congress.
The campaign is not the government. The meme manager will have literally 0 influence on foreign policy.
Changes to foreign policy are more likely to occur if you direct all this energy at the 30%+ of Americans who fully support an unfettered Israeli military, rather than disparaging the only US president in recent history to openly criticize Israel during wartime. In the context of US-Israel relations, Biden is waaaay outside the norm in pushing back against Israeli interests.
Even though the law can be circumvented, it nonetheless provides resistance. Traveling to another state, filling out paperwork, paying extra money, etc. all provide additional obstacles to overcome. If someone were having an acute mental health crisis and felt compelled to eat a barrel, even a few hours’ delay in acquiring a gun can make all the difference. For someone planning to use a gun for criminal activity, at some point they might just consider employment an easier alternative if acquiring a gun is too much of a pain.
We have already seen this effect in reverse with regard to immigration. Legal immigration is such a painful crapshoot that people are willing to surrender their fate to cartels as an alternative.
Read again. I have made no such claim; I simply scrutinized your assertion that LLMs lack any internal representations and challenged it with alternative hypotheses. You are the one that made the claim. I am perfectly comfortable with the conclusion that we simply do not know what is going on in LLMs with respect to human-like capabilities of the mind.
I have a different interpretation of those close calls: we were very very lucky and should not rely on defiance as a mechanism to avoid the apocalypse.
Nor can we assume that they cannot have the same emergent properties.
These cases are interesting tests of our First Amendment rights. “Real” CP requires abuse of a minor, and I think we can all agree that it should be illegal. But it gets pretty messy when we are talking about depictions of abuse.
Currently, we do not outlaw written depictions or drawings of child sexual abuse. In my opinion, we do not ban these things partly because they are obvious fictions. But I also think we recognize that we should not be in the business of criminalizing expression, regardless of how disgusting it is. I can imagine instances where these fictional depictions could be used in a way that is criminal, such as using them to blackmail someone. But in the absence of any harm, it is difficult to justify criminalizing fictional depictions of child abuse.
So how are AI-generated depictions different? First, they are not obvious fictions. Is this enough to cross the line into criminal behavior? I think reasonable minds could disagree. Second, is there harm from these depictions? If the AI models were trained on abusive content, then yes, there is harm directly tied to the generation of these images. But what if the training data did not include any abusive content, and these images really are purely depictions of imagination? Then the discussion of harms becomes pretty vague and indirect. Will these images embolden child abusers or increase demand for “real” images of abuse? Is that enough to criminalize them, or should they be treated like other fictional depictions?
We will have some very interesting case law around AI generated content and the limits of free speech. One could argue that the AI is not a person and has no right of free speech, so any content generated by AI could be regulated in any manner. But this argument fails to acknowledge that AI is a tool for expression, similar to pen and paper.
A big problem with AI content is that we have become accustomed to viewing photos and videos as trusted forms of truth. As we re-learn what forms of media can be trusted as “real,” we will likely change our opinions about fringe forms of AI-generated content and where it is appropriate to regulate them.
We do not know how LLMs operate. Similar to our own minds, we understand some primitives, but we have no idea how certain phenomena emerge from those primitives. Your assertion would be like saying we understand consciousness because we know the structure of a neuron.
You seem pretty confident that LLMs cannot have an internal representation simply because you cannot imagine how that capability could emerge from their architecture. Yet we have the same fundamental problem with the human brain and have no problem asserting that humans are capable of internal representation. LLMs adhere to grammar rules, present information with a logical flow, and express relationships between different concepts. Is this not evidence of, at the very least, an internal representation of grammar?
We take in external stimuli and perform billions of operations on them. This is internal representation. An LLM takes in external stimuli and performs billions of operations on them. But the latter is incapable of internal representation?
And I don’t buy the idea that hallucinations are evidence that there is no internal representation. We hallucinate. An internal representation does not need to be “correct” to exist.
No. Human evolution is driven primarily by mate selection.
How do hallucinations preclude an internal representation? Couldn’t hallucinations arise from a consistent internal representation that is not fully aligned with reality?
I think you are misunderstanding the role of tokens in LLMs and conflating them with internal representation. Tokens are used to generate a state, similar to external stimuli. The internal representation, assuming there is one, is the manner in which the tokens are processed. You could say the same thing about human minds, that the representation is not located anywhere like a piece of data; it is the manner in which we process stimuli.
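To make that concrete, here is a deliberately toy sketch in plain NumPy (nothing like a real transformer; every name, size, and matrix here is made up for illustration). The tokens themselves are just indices; anything you could call a representation lives in the weights and in the transient state produced while processing those indices:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d_model = 100, 16

# In a real model these matrices are learned; here they are random stand-ins.
embedding = rng.normal(size=(vocab_size, d_model))
W = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)

def hidden_state(token_ids):
    """Map a token sequence to a hidden state by repeated transformation."""
    h = embedding[token_ids].mean(axis=0)  # crude pooling of token embeddings
    for _ in range(4):                     # each "layer" reshapes the state
        h = np.tanh(W @ h)
    return h

tokens = [5, 42, 7]           # external stimuli, analogous to sensory input
state = hidden_state(tokens)  # the "representation" is this transient state
print(state[:4])
```

Note there is no place you can point to where token 42 is “stored”: whatever representation exists is in how the weights transform the state as the tokens are processed.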
My thesis is that we are asserting the lack of human-like qualities in AIs that we cannot define or measure. Assertions should be made on data, not uneasy feelings arising when an LLM falls into the uncanny valley.
I think where you are going wrong here is assuming that our internal perception is not also a hallucination by your definition. It absolutely is. But our minds are embodied, thus we are able to check these hallucinations against some outside stimulus. Your gripe that current LLMs are unable to do that is really a criticism of the current implementations of AI, which are trained on some data, frozen, then restricted from further learning by design. Imagine if your mind was removed from all stimulus and then tested. That is what current LLMs are, and I doubt we could expect a human mind to behave much better in such a scenario. Just look at what happens to people cut off from social stimulus: their mental capacities degrade rapidly, and that is just one type of stimulus.
Another problem with your analysis is that you expect the AI to do something that humans cannot do: cite sources without an external reference. Go ahead right now and, from memory, cite a source for something you know. Do not Google it; just remember where you got that knowledge. Now who is the one that cannot cite sources? The way we cite sources generally requires access to the source at that moment, and current LLMs do not have that by design. Once again, this is a gripe with the implementation of a very new technology.
The main problem I have with so many of these “AI isn’t really able to…” arguments is that no one is offering a rigorous definition of knowledge, understanding, introspection, etc. in a way that can be measured and tested. Further, we just assume that humans are able to do all these things without any tests to see if we can. Don’t even get me started on the free will vs. illusory free will debate that remains unsettled after centuries. But the crux of many of these arguments is the assumption that humans can do it and are somehow uniquely able to do it. We had these same debates about levels of intelligence in animals long ago, and we found that there really isn’t any cognitive capability that is uniquely human.
Where is the safety report for the Wuhan wet market? You know, the one that unequivocally started one viral pandemic? Then, while it was closed down, we enjoyed a period with no new coronavirus pandemics? And then, shortly after it reopened, there was another coronavirus pandemic originating in Wuhan? That wet market, do you have a report on that one?
Notice that there are methods, data, and peer reviews that I can freely scrutinize. All things your opinion piece lacks.
It is so strange to say that identity should take a back seat to humanism when every historical example of discrimination and dehumanization is based on identity. Identity in those instances is not self-imposed, but is used to define the outgroup that is being dehumanized. Identity politics is simply an honest accounting of the groups that are being discriminated against. When the discrimination ends, we see the group identity evaporate. We need only look at the early 20th century definitions of Caucasian, and how the identity politics of Irish and Italian Americans subsequently evaporated when that definition evolved to include all Americans of European descent, to see that identity politics is a reaction to injustice and not the other way around.
For real, we’ve got the first openly pro-union president, we expanded NATO, student loan forgiveness, actual infrastructure funding, the first administration to openly push back against Israel during wartime, all of that in only 4 years. He is the most effective president of my lifetime and I am happy to vote for him again.
The real question you are asking is whether inaction is worse than inconsistency. Should we not put out a fire unless we can put out all fires? What you are suggesting is to let something burn for the sake of consistency.
They sell CBD oil with these little droppers for dosing, but when you read the studies the dosage is like a mouthful of oil. It’s like the exact opposite problem of melatonin dosing.