

… why 7/8?


the output is probabilistic not deterministic. By definition, that means it’s not entirely consistent or reproducible, just… maybe close enough.
That isn’t a barrier to making guarantees about a program’s behavior; the entire field of randomized algorithms is devoted to proving exactly such guarantees. The problem is people willfully writing and deploying programs which they can neither understand nor control.
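To be concrete, here’s a sketch of my own (not code from anywhere in particular) of Freivalds’ algorithm in Go: it checks whether A·B = C using random 0/1 vectors. Every run is probabilistic, but the guarantee is exact: a correct product is never rejected, and a wrong product survives k independent trials with probability at most 2^-k.

```go
// Freivalds' algorithm: a randomized check that A*B == C for n x n integer
// matrices, in O(k*n^2) time instead of the O(n^3) of recomputing the product.
// One-sided error: never rejects a correct product, accepts a wrong one with
// probability <= 2^-k.
package main

import (
	"fmt"
	"math/rand"
)

// matVec multiplies an n x n matrix by a length-n vector.
func matVec(m [][]int64, v []int64) []int64 {
	n := len(v)
	out := make([]int64, n)
	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			out[i] += m[i][j] * v[j]
		}
	}
	return out
}

// freivalds runs k independent random checks of A*B == C.
func freivalds(a, b, c [][]int64, k int) bool {
	n := len(a)
	for trial := 0; trial < k; trial++ {
		r := make([]int64, n)
		for i := range r {
			r[i] = int64(rand.Intn(2)) // random 0/1 vector
		}
		// Compare A*(B*r) with C*r: each is just matrix-vector products.
		left := matVec(a, matVec(b, r))
		right := matVec(c, r)
		for i := range left {
			if left[i] != right[i] {
				return false // definitely A*B != C
			}
		}
	}
	return true // A*B == C with probability >= 1 - 2^-k
}

func main() {
	a := [][]int64{{1, 2}, {3, 4}}
	b := [][]int64{{5, 6}, {7, 8}}
	good := [][]int64{{19, 22}, {43, 50}} // the true product
	bad := [][]int64{{19, 22}, {43, 51}}  // off by one entry
	// Almost certainly prints: true false
	fmt.Println(freivalds(a, b, good, 20), freivalds(a, b, bad, 20))
}
```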


computer, print awawa.


i think it’s when you and a bunch of other vegans live in a group home together and argue over who does the dishes


a lot of this “computational irreducibility” nonsense could be subsumed by the time hierarchy theorem, which apparently Stephen has never heard of
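For reference, the deterministic version (standard statement; f and g time-constructible):

```latex
% Deterministic time hierarchy theorem:
f(n)\,\log f(n) \;=\; o\big(g(n)\big)
\;\Longrightarrow\;
\mathrm{DTIME}\big(f(n)\big) \;\subsetneq\; \mathrm{DTIME}\big(g(n)\big)
```

i.e. it is already a theorem that some computations provably require more time and cannot be shortcut, which seems to be the substantive core of “irreducibility”.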


He straight-up misstates how NP computation works. Essentially he writes that a nondeterministic machine M computes a function f if, on every input x, there exists a path of M(x) which outputs f(x). But this is total nonsense: it implies that a machine M which just branches repeatedly to produce every possible output of a given size “computes” every function of that size.
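To spell out that degenerate machine (my formalization, not a quote from him):

```latex
% The "guess everything" machine, on input x with target output length m:
M(x):\quad \text{nondeterministically guess } y \in \{0,1\}^{m},\ \text{then halt with } y \text{ on the output tape}.
% Under the quoted definition, for every f with |f(x)| = m there is some branch
% of M(x) that outputs f(x), so this one machine "computes" all such f at once.
```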


the ruliad is something in a sense infinitely more complicated. Its concept is to use not just all rules of a given form, but all possible rules. And to apply these rules to all possible initial conditions. And to run the rules for an infinite number of steps
So it’s the complete graph on the set of strings? Stephen how the fuck is this going to help with anything


if two people disagree on a conclusion then either they disagree on the reasoning or the premises.
I don’t think that’s an accurate summary. In Aumann’s agreement theorem, the different agents share a common prior distribution but are given access to different sources of information about the random quantity under examination. The surprising part is that they agree on the posterior probability provided that their conclusions (not their sources) are common knowledge.
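Stated a bit more carefully (the standard formulation, from memory):

```latex
% Aumann (1976): agents 1 and 2 share a common prior P on a finite state space \Omega,
% with information partitions \Pi_1 and \Pi_2. Fix an event E and a true state \omega,
% and let q_i = P(E \mid \Pi_i(\omega)) be agent i's posterior.
\text{If the pair } (q_1, q_2) \text{ is common knowledge at } \omega, \text{ then } q_1 = q_2.
```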


Sorry for you and your cat. You did the right thing, but that doesn’t make it any easier.


??????????????????


I’m also a big fan of the concurrency implementation; I wish other languages made it so easy to use green threads & channels.
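(Assuming this is about Go.) For anyone who hasn’t seen it, the whole model really is just this small; a hypothetical worker-pool sketch:

```go
// Minimal goroutines + channels example: three workers squaring numbers.
package main

import "fmt"

// worker reads jobs from one channel and writes results to another.
func worker(jobs <-chan int, results chan<- int) {
	for n := range jobs {
		results <- n * n
	}
}

func main() {
	jobs := make(chan int, 10)
	results := make(chan int, 10)

	// Spawning a green thread is just the `go` keyword.
	for w := 0; w < 3; w++ {
		go worker(jobs, results)
	}

	for i := 1; i <= 10; i++ {
		jobs <- i
	}
	close(jobs) // lets the workers' range loops finish

	sum := 0
	for i := 0; i < 10; i++ {
		sum += <-results
	}
	fmt.Println(sum) // 385
}
```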


That o3 does well on frontier math held-out set is impressive, no doubt
I think there is plenty of room for doubt still. elliotglazer on reddit writes:
Epoch’s lead mathematician here. Yes, OAI funded this and has the dataset, which allowed them to evaluate o3 in-house. We haven’t yet independently verified their 25% claim. To do so, we’re currently developing a hold-out dataset and will be able to test their model without them having any prior exposure to these problems.
My personal opinion is that OAI’s score is legit (i.e., they didn’t train on the dataset), and that they have no incentive to lie about internal benchmarking performances. However, we can’t vouch for them until our independent evaluation is complete.
(emphasis mine). So there is good reason to doubt that the “held-out dataset” even exists.


Unfortunately “states of quantum systems form a vector space, and states are often usefully described as linear combinations of other states” doesn’t make for good science fiction compared to “whoa dude, like, the multiverse, man.”


How do you figure? It’s absolutely possible in principle that a quantum computer can efficiently perform computations which would be extremely expensive to perform on a classical computer.
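The textbook example is factoring an integer N: Shor’s algorithm is polynomial-time on a quantum computer, while the best known classical algorithm (the general number field sieve) is superpolynomial. The GNFS bound below is heuristic, and nobody has proven that classical factoring must be hard, so “in principle” is doing some work here.

```latex
\text{Shor:}\ O\!\big((\log N)^{3}\big)
\qquad
\text{GNFS (heuristic):}\ \exp\!\Big(\big(\tfrac{64}{9}\big)^{1/3}\,(\ln N)^{1/3}\,(\ln\ln N)^{2/3}\,(1+o(1))\Big)
```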


i read the title and was like damn we’re dunking on game engines now?


Wait I know nothing about chemistry but I’m curious now, what are the footguns?


I read one of the papers. About the specific question you have: given a string of bits s, they choose to associate with s its empirical distribution, as if s were generated by an i.i.d. Bernoulli process. So if s has 10 zero bits and 30 one bits, its associated empirical distribution is Ber(3/4), and that is the distribution whose entropy they compute. I have no idea on what basis they make this choice.
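Concretely, the number they’d get for that example is just the binary entropy of p = 3/4:

```latex
H\big(\mathrm{Ber}(3/4)\big) \;=\; -\tfrac{3}{4}\log_2\tfrac{3}{4} \;-\; \tfrac{1}{4}\log_2\tfrac{1}{4} \;\approx\; 0.811\ \text{bits}.
```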
The rest of the paper didn’t make sense to me - they are somehow assigning a number N of “information states” which can change over time as the memory cells fail. I honestly have no idea what it’s supposed to mean and kinda suspect the whole thing is rubbish.
Edit: after reading the author’s quotes from the associated hype article I’m 100% sure it’s rubbish. It’s also really funny that they didn’t manage to catch the COVID-19 research hype train so they’ve pivoted to the simulation hypothesis.


For some reason the previous week’s thread doesn’t show up on the feed for me (and didn’t all week)… nvm, i somehow managed to block froztbyte by accident, no idea how


rationalism is when i pull five numbers out of my ass and multiply them together