Okay, now hit it again.
Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and then some time on kbin.social.
Some people are so addicted to anger that they’ll shoot themselves in the foot just so they’ll have something to complain about.
“The gimp” is a character from Pulp Fiction. You’re imagining things and refusing to use a powerful tool in response to that imagined slight.
Maybe to make it absolutely clear “I’m getting rid of my grenades, this isn’t some trick to suicide-bomb you guys.”
And unfortunately, this article is also just a response to media clickbait, not the discussion piece it tries to look like.
And becomes new clickbait in the process.
It looks like the ladies approached the soldiers, not the other way around. The soldier speaking was polite, and didn’t tell her what to say in response to his “glory to Ukraine.” She could have just said nothing. I’m really not seeing a problem here.
If the women felt threatened they could have simply not approached the Ukrainian soldiers.
Eh, there didn’t seem to be any sort of implied threat or imbalance of power in the little snippet presented here. The old ladies approached the soldiers and asked for a lift, and the soldiers seemed honestly apologetic that they had no room to provide one.
It’s quite interesting seeing the “depoliticization” of the general Russian population having this effect; when the Ukrainians moved in, a surprising number seemed to just shrug and go “new management, I guess.” It will be interesting to see how the occupation goes if it’s long-term.
I’m sure the Ukrainian soldiers are rather busy with important things of their own, but if they’ve got any spare bandwidth it’d be neat if they were able to help organize the Russian civilians a bit and keep this kind of lawlessness suppressed. Heck, if they’re digging in for the long term they may end up needing to provide humanitarian aid for the people who chose to stay behind. That’ll be quite the look.
You’re welcome to try other methods but LLMs seem to be working best so far.
With a decompiler it should be pretty straightforward to automatically check for “hallucinations”: the compiled code is still right there, and you can compare the decompiled logic against the original.
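A toy sketch of that check, with stand-in functions rather than any real decompiler tooling: treat the shipped binary as a black box and verify that the LLM’s “decompiled” version behaves identically across a battery of inputs. Real pipelines would execute the actual executable or compare lifted IR, but the principle is the same, since the ground truth never left.

```python
def original_binary(x: int) -> int:
    # Stand-in for running the shipped executable on an input.
    return (x * 3 + 7) % 256

def llm_decompiled(x: int) -> int:
    # Stand-in for the candidate source the decompiler/LLM produced.
    return (3 * x + 7) % 256

def behaviorally_equivalent(f, g, inputs) -> bool:
    # Differential test: the decompiled logic must match the binary
    # on every input we throw at it.
    return all(f(i) == g(i) for i in inputs)

print(behaviorally_equivalent(original_binary, llm_decompiled, range(1024)))
```

Exhaustive input sweeps obviously don’t scale to real programs, but even random differential testing catches a hallucinated branch or constant very quickly.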
The time at which the source code was lost is irrelevant for decompilation; decompilation works from the binary files. Those are the files that are out there being played right now.
Until recently decompilers tended to produce rough and useless code for the most part, but I’m looking forward to seeing what modern LLMs will bring to decompilation. They could be trained specifically for the task.
Shush, this is an opportunity for people to dump on Microsoft, if you take it from them they’ll turn on you.
particularly for companies entrusted with vast amounts of sensitive personal information.
I nodded along to most of your comment but this cast a discordant and jarring tone over it. Why particularly those companies? The CrowdStrike failure didn’t actually result in sensitive information being deleted or revealed, it just caused computers to shut down entirely. Throwing that in there as an area of particular concern seems clickbaity.
One of the background details I liked in Ghost in the Shell was how the high-end data analysts and programmers employed by the government did their work using cybernetic hands whose fingers could separate into dozens of smaller fingers to let them operate keyboards extremely quickly. They didn’t use direct cybernetic links because that was a security vulnerability for their brains.
Different countries have a variety of very different approaches to appointing judges, and some of those methods are not nearly as easy to corrupt as the American system.
Americans are subject to a lot of cultural indoctrination about how their system is the “greatest democracy in the world,” “leader of the free world,” and other such platitudes. It’s really not the case, though. America’s system is one of the earliest that’s still around, and unfortunately that means it’s got a lot of problems that have been corrected in democracies that were founded later on but have remained embedded in America’s.
Doesn’t help that America has a somewhat problematic electorate as well.
Not necessarily. Curation can also be done by AIs, at least in part.
As a concrete example, NVIDIA’s Nemotron-4 is a system specifically intended for generating “synthetic” training data for other LLMs. It consists of two separate LLMs: Nemotron-4 Instruct, which generates text, and Nemotron-4 Reward, which evaluates the outputs of Instruct to determine whether they’re good to train on.
Humans can still be in that loop, but they don’t necessarily have to be. And the AI can help them in that role so that it’s not necessarily a huge task.
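The generate-then-score loop looks roughly like this. Everything here is a placeholder, not the real Nemotron-4 API: `generate` stands in for the instruct model and `reward` for the reward model’s quality score.

```python
import random

random.seed(0)  # deterministic toy run

def generate(prompt: str) -> str:
    # Placeholder for the instruct model producing a candidate sample.
    return f"answer to: {prompt} (variant {random.randint(0, 9)})"

def reward(sample: str) -> float:
    # Placeholder for the reward model's quality score in [0, 1].
    return random.random()

def build_synthetic_dataset(prompts, threshold=0.5):
    # Keep only the generations the reward model scores highly.
    dataset = []
    for p in prompts:
        sample = generate(p)
        if reward(sample) >= threshold:
            dataset.append(sample)
    return dataset

data = build_synthetic_dataset(["what is a deer?"] * 100)
print(len(data))
```

The point is that the filter, not the generator, is what keeps the synthetic data usable; a human curator can sit in the same spot in the loop, or just spot-check what the automated filter passes.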
It means that even if AI is having more environmental impact right now, there’s no basis for saying “you can’t improve it much.” Maybe you can. As I said previously, a lot of research is being done on exactly that: methods to train and run AIs far more cheaply than has been done so far. I see developments along those lines being discussed all the time in AI forums such as /r/localllama.
Much like with blockchains, though, it’s really popular to hate AI and “they waste enormous amounts of electricity” is an easy way to justify that. So news of such developments doesn’t spread easily.
Funny you should mention blockchains. Ethereum, the second-largest blockchain after Bitcoin, switched from proof-of-work to a proof-of-stake validation system two and a half years ago. That cut its energy use by 99.95%. The “blockchains are inherently a huge waste of energy” narrative is just firmly lodged in the popular view of them now, though, despite having long since been proven false.
A lot of work has been going into making AIs more energy efficient, both in training and in inference stages. Electricity costs money, so obviously everyone’s interested in more efficient AIs. That makes them more profitable.
The term “model collapse” gets brought up frequently to describe this, but it’s commonly very misunderstood. There actually isn’t a fundamental problem with training an AI on data that includes other AI outputs, as long as the training data is well curated to maintain its quality. That needs to be done with non-AI-generated training data already anyway, so it’s not really extra effort. The research paper that popularized the term “model collapse” used an unrealistically simplistic approach: it just recycled all of an AI’s output into the training set for subsequent generations of AI, without any quality control or additional training data mixed in.
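A toy illustration of the difference curation makes. The numbers here are just stand-ins for whatever quality signal you’d actually use; the paper’s setup corresponds to skipping the filter and the cap entirely and feeding everything back in.

```python
def next_training_set(human_data, ai_data, quality, min_quality=0.6, ai_fraction=0.3):
    """Keep all human data, admit only high-quality AI samples, and cap
    the AI share of the final mix at ai_fraction."""
    good_ai = [s for s in ai_data if quality(s) >= min_quality]
    cap = int(len(human_data) * ai_fraction / (1 - ai_fraction))
    return human_data + good_ai[:cap]

human = [0.9, 0.8, 0.85]          # stand-ins for curated human-sourced samples
ai = [0.95, 0.4, 0.7, 0.2, 0.88]  # raw model outputs of mixed quality

mixed = next_training_set(human, ai, quality=lambda s: s)
print(mixed)  # → [0.9, 0.8, 0.85, 0.95]
```

The low-quality AI samples never make it in, and the human-sourced anchor never shrinks between generations, which is exactly the condition the collapse paper’s recycling loop violated.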
Not in every way. They’re cheaper and faster.