32-bit CPUs having difficulty accessing more than 4GB of memory was exclusively a Windows problem.
I’m not sure what you are talking about. Linux got PAE in 1999. Windows XP got PAE in 2001.
Not really; Raspberry Pi had that same issue with its 32-bit distros.
You still had a 4GB memory limit for processes, as well as a total memory limit of 64GB. The first one in particular was a problem for Java apps before AMD introduced 64-bit extensions, and a reason to use Sun servers for them.
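Rough back-of-the-envelope for where those two numbers come from (my own sketch, not from the comment above): 32-bit virtual addresses give 2^32 bytes = 4GB per process, while PAE's 36-bit physical addresses give 2^36 bytes = 64GB of total RAM.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* A 32-bit virtual address can name 2^32 bytes = 4 GiB,
       which stays the per-process ceiling even with PAE. */
    uint64_t per_process = (uint64_t)1 << 32;

    /* PAE widens the *physical* address to 36 bits,
       so the kernel can map 2^36 bytes = 64 GiB of RAM in total. */
    uint64_t pae_physical = (uint64_t)1 << 36;

    printf("per-process virtual space: %" PRIu64 " bytes (%" PRIu64 " GiB)\n",
           per_process, per_process >> 30);
    printf("PAE physical space:        %" PRIu64 " bytes (%" PRIu64 " GiB)\n",
           pae_physical, pae_physical >> 30);
    return 0;
}
```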
Yeah I acknowledged the shortcomings in a different comment.
It was a duct tape solution for sure.
Interesting! Do you have a link to a write-up about this? I don’t know anything about the Windows memory manager.
Intel PAE is the answer, but it still came with other issues, so 64-bit was still the better option.
Also, the entire article comes down to simple math.
The number of bits is just the number of digits.
So a 4-digit number maxes out at 9 999, but an 8-digit number maxes out at 99 999 999.
So when you double the number of digits, the maximum value grows exponentially: 10^4 times bigger in this case. It only sounds small because you’re looking at the exponent, which merely doubles.
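To put it in binary terms (my own sketch, not from the article): going from 32 to 64 bits multiplies the number of addressable values by 2^32, even though the exponent itself only doubles.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t values_32 = (uint64_t)1 << 32;   /* 2^32 distinct 32-bit addresses */

    printf("32-bit: %" PRIu64 " addresses (~4.3 billion)\n", values_32);
    /* 2^64 itself overflows uint64_t by one, so print the max value instead. */
    printf("64-bit: %" PRIu64 " + 1 addresses (~1.8e19)\n", (uint64_t)UINT64_MAX);

    /* Doubling the bit count multiplies the range by 2^32, not by 2 --
       the same way going from 4 to 8 decimal digits multiplies the max by ~10^4. */
    printf("ratio 64-bit / 32-bit: %" PRIu64 "\n", values_32);
    return 0;
}
```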
10^4 is WAY smaller than 10^8
https://en.wikipedia.org/wiki/Physical_Address_Extension
It was actually 3GB, because operating systems have to reserve parts of the memory address space for other things. Addressing above 4GB is harder for all 32-bit operating systems; most just implemented the additional complexity much earlier, because Linux runs on large servers and the like. Windows actually had a way to switch over to support it in some versions too, probably the NT kernels that were also running on servers.
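For a hands-on feel (my own throwaway sketch, not from the comment above), here's a C program that just allocates until malloc gives up. Built as a 32-bit binary (e.g. gcc -m32), it typically tops out around 2–3GB depending on the OS's kernel/user split, nowhere near the full 4GB.

```c
#include <stdio.h>
#include <stdlib.h>

/* Allocate 64 MiB chunks until malloc fails, to see how much of the
   4 GiB virtual space a single 32-bit process can actually use.
   Build as a 32-bit binary, e.g. `gcc -m32 probe.c -o probe`. */
int main(void) {
    const size_t chunk = 64u * 1024 * 1024;
    size_t total = 0;

    while (malloc(chunk) != NULL) {   /* leaked on purpose; the process exits anyway */
        total += chunk;
    }
    printf("allocated ~%zu MiB before failure\n", total / (1024 * 1024));
    return 0;
}
```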
A quick skim of the Wikipedia article seems like a good starting point for understanding the old problem.
https://en.m.wikipedia.org/wiki/3_GB_barrier
Only slightly related, but here’s the compiler flag to disable an arbitrary 2GB limit on x86 programs.
Finding the reason for its existence from a credible source isn’t as easy, however. If you’re fine with an explanation from StackOverflow, you can infer that it’s there because some programs treat pointers as signed integers and die horribly when anything above 7FFFFFFF gets returned by the allocator.
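Not from the linked answer, but here's a contrived sketch of that failure mode: stash an address above 0x7FFFFFFF in a signed 32-bit integer and watch the comparisons go wrong.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Pretend the allocator handed back an address above the 2 GiB mark. */
    uint32_t addr = 0x80000000u;

    /* Buggy code stores the pointer value in a signed 32-bit int...
       (on typical two's-complement platforms this wraps to INT32_MIN). */
    int32_t as_signed = (int32_t)addr;

    printf("as unsigned: 0x%08" PRIX32 " (%" PRIu32 ")\n", addr, addr);
    printf("as signed:   %" PRId32 "\n", as_signed);

    /* ...so ordering checks misfire: the "high" address now sorts below
       every small address, and anything keyed on the sign bit breaks. */
    if (as_signed < 0x1000) {
        printf("bug: 0x80000000 compares as lower than 0x1000\n");
    }
    return 0;
}
```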