Linux X86/x86_64 Will Now Always Reserve the First 1MB of RAM (phoronix.com)

AmiMoJo shares a report from Phoronix: The Linux x86/x86_64 kernel code already had logic in place for reserving portions of the first 1MB of RAM, to keep the BIOS or kernel from potentially clobbering that space, among other reasons. Now Linux 5.13 is doing away with that 'wankery' and will just unconditionally reserve the first 1MB of RAM. The kernel was already catering to Intel Sandy Bridge graphics accessing memory below the 1MB mark, to the first 64K of memory being corrupted by some BIOSes, and to similar problems cropping up in that low area of memory. Rather than maintaining all that logic and chasing other possible niche cases beyond the EGA/VGA frame buffer and BIOS, the kernel is playing it safe and always reserving the first 1MB of RAM, so nothing the kernel cares about ever lands there.
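
For the curious, the change boils down to a single unconditional early-boot reservation. Here is a minimal sketch in kernel style, assuming the memblock allocator API (memblock_reserve() and the SZ_1M constant from linux/sizes.h); the helper name is hypothetical, and this illustrates the approach rather than reproducing the actual 5.13 patch:

#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/sizes.h>

/* Hypothetical helper showing the 5.13 approach: rather than reserving
 * individual known-bad ranges (the first 64K some BIOSes corrupt, the
 * sub-1MB pages Sandy Bridge graphics can touch, the EGA/VGA frame
 * buffer), keep the entire first megabyte out of the allocator's hands. */
static void __init reserve_first_megabyte(void)
{
        memblock_reserve(0, SZ_1M);     /* physical address 0, length 1MB */
}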
  • by TomR teh Pirate ( 1554037 ) on Wednesday June 09, 2021 @07:25PM (#61471778)
    This should help with EMS memory card configurations too. Sometimes it's hard to find a free 64k window.
  • I can hear it now... (Score:4, Interesting)

    by TWX ( 665546 ) on Wednesday June 09, 2021 @07:42PM (#61471830)

    ..."but what f I need that RAM?"

    I did have Linux on a system with 8MB RAM, way back in the mid-nineties, but even having been in the situation I could accept the resource loss.

    Now if only I could bear to part with my 2GB iomega Jaz drive...

    • There are some embedded-device or legacy-system deployments on extremely memory-constrained platforms that may feel this one, but those developers are probably capable of patching it out or porting the "wankery" back in to handle that MB.
      • Not too many of those legacy system deployments are going to be powered by x86-64, so they should be fine.
        • And not too many of those legacy deployments will be running Linux made in this decade either.
        • I could have misunderstood the article and the patch info, but it seems that while the bug that triggered the change was for x86-64, the patch affects x86 (32-bit) too.
          But as The123king said, it's not likely to filter through to many of those embedded devices for a while yet due to the lag in that world.
          • There's no reason why it ever needs to filter into any non-x86 (32/64) platform, because the whole point of this is to avoid ill-behaved PC BIOS code, and no other platform will have that code to avoid. There are some embedded systems that are not PCs (and lack a BIOS) based around i386 class processors, and they might be affected if the feature isn't flagged carefully, but they're pretty rare.
            • There are a lot of devices out there based on things like Via x86 chips with single or double digit MBs of RAM still, which are basically PCs, and do have BIOSes (with well understood behaviour usually) which need every byte of RAM they can get. But as I said, the people who are developing for those systems can probably do the work required to patch back in the 1st MB handling if they really need it and do happen to be running bleeding edge kernels.
              Basically I'm saying there will be an impact here, but it'
    • I did have Linux on a system with 8MB RAM, way back in the mid-nineties

      Now I'm having flashbacks. I think the lowest spec'd system I ever installed Linux on was an 80386SX at 40 MHz with 4MB RAM, but there's a possibility I tried it on one of our 16 MHz 386SX machines, just to see how it would run. The 40 MHz machine worked as a serial port server, with a couple of 16550 UART cards talking to some VT220 terminals. (We were using pine for email in those days, and had onions tied to our belts, as that was the fashion of the time.) For a while, my main portable was an old ActionN [vogons.org]

      • by TWX ( 665546 )

        Lowest I ever put Linux on was a Macintosh Centris 660AV, a 25MHz 68040. I think I had an 8MB SIMM in addition to the 4MB soldered on for a whopping 12MB, but it possibly had a 16MB SIMM and 4MB on board for 20MB. Can't remember now.

        16.9 bogomips if I remember the output of /proc/cpuinfo correctly.

        At the time I did this I was rocking a 350MHz AMD K6-2, probably with something like 512MB RAM. Eventually I upgraded that AMD machine to 1.5GB RAM, there's a market sweet-spot where RAM is old enough that dema

    • My 1st thought is things like routers running Linux. Losing 1MB of the typical 8/16/32MB on these is significant.
    • Now if only I could bear to part with my 2GB iomega Jaz drive...

      You have a functional iomega Jaz drive? This should be on the Slashdot front page!

      • by sconeu ( 64226 )

        He never said it was functional

        • True.
          • by TWX ( 665546 )

            It worked the last time I used it.

            It's an internal SCSI model, I suspect that helped prolong its life since it was less subject to casual jostling/shock compared to iomega's external products.

            Damn thing set me back $400. To a poor college student that was a lot of money. Now I look at my 256GB memory card for my DSLR and I wonder why I bother keeping the Jaz2.

            • It's an internal SCSI model, I suspect that helped prolong its life since it was less subject to casual jostling/shock compared to iomega's external products.

              A school I went to at the time used the internal Jaz drives for audio and video production. The fact they were internal didn't help the failure rate. The disks themselves had to be well protected. A damaged disk could damage the drive as well. That damaged drive could then damage more disks, and cascade through all of them.

              • by TWX ( 665546 )

                I think that's why mine never failed: I was the only user of my drive and I never shared disks with anyone, so my disks never left my desk. They were either on the shelf or in the drive.

  • But how will I boot it on my Z80 with 64K and 2.5MHz clock rate? Damn you, Torvalds!
    • Re:But... (Score:5, Funny)

      by Mal-2 ( 675116 ) on Wednesday June 09, 2021 @07:52PM (#61471870) Homepage Journal

      Not his fault you're trying to run x86-64 binaries on a Z80. Recompile, maybe it works.

      • by Z00L00K ( 682162 )

        Just load Z80 microcode into one of the CPU cores.

        Imagine the speed of CP/M on a modern core at 3.0+ GHz with caches. About 1000 times faster than back in the day.

        • Every character to the console or serial port was its own system call on CP/M. A modern architecture like amd64 has pretty significant overhead for syscalls versus the simple jumps through the Zero Page (CP/M) or PSP (DOS). I'm sure the multi-GHz modern CPU is still faster by a lot, but the gains are somewhat suppressed by the worse syscall architecture.

          • If only there were some way to write to video memory directly, making everything you said stupid...
          • by narcc ( 412956 )

            Not one call per character all the time. I seem to remember that you could output a $-terminated string. (No, really.)

            The internet remembers better than I do. That was BDOS function 9:

            LD DE, address_of_string ; DE -> string terminated by '$'
            LD C, 9 ; function number 9 (print string) goes in register C
            CALL 5 ; BDOS entry point at address 0005h

            But, yes, there are a lot of functions that just output a single character.

  • by Black Parrot ( 19622 ) on Wednesday June 09, 2021 @07:53PM (#61471874)

    Just buy systems that don't have the first MB of RAM.

    • Re:Easier solution (Score:4, Interesting)

      by TechyImmigrant ( 175943 ) on Thursday June 10, 2021 @02:14AM (#61472538) Homepage Journal

      I predict that soon someone will map that free 1MiB somewhere else to fulfill some purpose. Then the logic to plug the corner cases will get really complex and someone will get their computer security PhD by coming up with some new aliasing attack, which will then become the basis of a new strain of ransomware.

      This is always how these things happen.

  • Late to the party (Score:5, Informative)

    by Anonymous Coward on Wednesday June 09, 2021 @08:26PM (#61471956)

    https://www.phoronix.com/scan.... [phoronix.com]

    tl;dr : Windows has been unconditionally reserving the first 1MB of RAM on both Intel and AMD systems for about 11 years now.

    • Yeah, but Windows memory requirements have been in the GBs for longer, so they can afford that assumption. On embedded systems, Windows "embedded"* requires less, but still in the hundreds of megabytes.

      Of course, one could argue it's unfair to compare a minimal Linux distribution with a Windows bundle, but that is the limitation they set themselves.

      * Last I checked, Windows "embedded" requires a top-of-the-line desktop from the year 2000, x86 only, and a special BIOS, so they can still account for b
      • If you're running x86-64 Linux on a system with less than 512MB RAM, you really need to upgrade your computer.
        • Re:Late to the party (Score:4, Interesting)

          by The Evil Atheist ( 2484676 ) on Thursday June 10, 2021 @04:21AM (#61472692)
          My implied point is that the reason they tried to avoid losing that first MB in the first place was that they didn't want a difference between x86 and proper embedded systems, whose custom firmware doesn't accidentally clobber that MB.

          And there are good use cases for x86 with 512MB of memory - virtual machines. Yes, I know things like Docker, Kubernetes, or whatever the container flavour of choice is now are popular, but "proper" or "traditional" VMs are still necessary for decent isolation. If you can run a proper VM with 512MB of memory, then you can fit a lot more VMs on that one machine.
          • It's 2021. You can pick up an old workstation with 24GB DDR3 for less than $100. That's at least 24 VMs with 1GB RAM each. If you need 48+ VMs for your particular application, I bet you can also afford 40+GB RAM on hardware that isn't older than most elementary school children.
            • It's 2021: you're not using an old $100 workstation with 24GB of DDR3 for something that requires speed and reliability for those ~20 VMs (you're probably still going to want to give your hypervisor some room to manoeuvre).

              Back when I was a tester, I'd run a few VMs for testing on my measly 16GB laptop, while still requiring most of the rest of that 16GB for developing/compiling/debugging. Do you really think an employer would shell out for yet another computer if they could avoid it? It's not all abo
              • I can't really explain my horror at the insanity of running any significant quantity of VMs on a laptop. Honestly, that $100 workstation would be leaps and bounds ahead of even the fastest top-of-the-line laptop for that sort of task. Of course, any competent IT department would have some servers lying around as a test environment anyway.
                • Yet that insanity is reality for a lot of people in big corporations, trying to get the work done with what their IT department has decided they will get access to.

                  And for a lot of researchers who have no fixed office to run their systems in, but have to rely on their laptop - simply because there is no space anywhere to put one of those old workstations.

                  And for piles and piles of students, who often end up in much deeper water than they ought to, needing to find limits on systems before they can get any kind

              • by DarkOx ( 621550 )

                Right, people don't understand the cost of "behind the glass" in the datacenter vs. PCs.

                Every company everywhere would have a Raspberry Pi-sized SoC thing glued to the back of displays and be using some kind of virtual desktop solution rather than PCs if it worked like the GP thinks. It would be way easier and cheaper to manage, but it's not.

                Let's say you have like 2000 PC users - that is a lot of idle resources, but to have enough to give them a similar experience, even thin-provisioned in the DC, it's still a slam dunk the PC

      • Yeah, but Windows memory requirements have been in the GBs for longer, so they can afford that assumption.

        11 years ago the average shitbox PC had 1-2GB of RAM; hell, most $300 netbooks had that much. Linux could afford that assumption just as well.

        I think you just haven't realised how old you've gotten and what PC specs were like 11 years ago.

    • by F.Ultra ( 1673484 ) on Wednesday June 09, 2021 @09:36PM (#61472104)
      Truth be told, some people say that Windows has been wasting RAM for far longer than just 11 years!
    • https://www.phoronix.com/scan.... [phoronix.com]

      tl;dr : Windows has been unconditionally reserving the first 1MB of RAM on both Intel and AMD systems for about 11 years now.

      How long did it take for Microsoft to pick up the (password) salt shaker? Have they found it yet?

      It's a bit silly for Microsoft to be bragging about party attendance when they were blackout drunk slamming shots of 190-proof Backwards Compatibility for a couple of decades.

    • by AmiMoJo ( 196126 )

      Linux was trying to be clever by only reserving areas known to cause problems, but it just wasn't worth the effort for the sake of the roughly 512K of RAM, split into small chunks, that it allowed to be used. A rough sketch of that quirk-by-quirk style follows.
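
      The helper predicates in this sketch are hypothetical, but the shape matches the conditional, quirk-by-quirk approach being deleted:

      /* Hypothetical sketch of the old quirk-by-quirk style: each known
       * problem range gets its own check and its own small reservation. */
      static void __init reserve_known_bad_low_ranges(void)
      {
              /* Some BIOSes corrupt the first 64K of memory. */
              if (bios_corrupts_low_64k())
                      memblock_reserve(0, SZ_64K);

              /* Sandy Bridge integrated graphics may access pages below 1MB. */
              if (snb_gfx_workaround_needed())
                      memblock_reserve(snb_bad_page_base(), PAGE_SIZE);
      }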

  • I suspected that. Can we please list some of the big offenders, defective BIOSes/drivers, and/or workarounds? In some countries you can return defective products, assuming you know or can prove a manufacturing defect. My HP with ancient Radeon graphics often won't boot Windows, but resetting the BIOS sometimes works after about 30 goes. Also note IBM mainframes have fenced bottom memory for security reasons, which they did, oh, 30-40 years ago, which killed a lot of branch-to-x'00' instances.
    • Honestly, I think the open source folks need to publish stand-alone unit tests for BIOS and Linux compatibility - something a firmware engineer can set up easily that exposes the problems Linux typically faces. Not everyone would run it, but if it saves one shitty laptop it's worth it for both the OEM and the Linux community.

  • When Linux ran on a 386 system with only 1MB RAM total

    • Ok I totally thought that was going in the direction of a "Pepperidge Farms remembers . . ." joke.

  • Greedy. (Score:5, Funny)

    by Tough Love ( 215404 ) on Wednesday June 09, 2021 @11:39PM (#61472334)

    Greedy. 640K ought to be enough for anybody.

  • ARM (Score:5, Interesting)

    by JBMcB ( 73720 ) on Wednesday June 09, 2021 @11:42PM (#61472340)

    Not directly related, but this is one of the reasons the ARM architecture is more efficient than x86 these days. BIOS corruption issues? This stuff should have gone away 20 years ago. When a PC boots in BIOS mode, it still acts like it's going to boot into DOS. Microsoft and Intel need to pull an Apple, rip the band-aid off, and summarily dump all that legacy junk. It's gobbling up opcodes and silicon, and creating system weirdness that doesn't need to be there.

    Sure, maybe some new machines need to still boot into real mode for some reason. License that junk out to someone else, just like you can still buy second-sourced i486 cores if you need them.

    • Pure UEFI has been advancing. https://www.google.com/amp/s/a... [google.com]
    • You can do the same thing that Apple did on your PC. Go into your motherboard config and set it to EFI only.
    • Re:ARM (Score:5, Informative)

      by AmiMoJo ( 196126 ) on Thursday June 10, 2021 @07:49AM (#61472934) Homepage Journal

      Modern PCs don't act like they are going to boot DOS. In fact most ship with the compatibility module needed to boot DOS disabled.

      The issue is that the UEFI often has bugs, as do some firmwares and drivers, like the aforementioned Intel graphics.

      Getting rid of the BIOS wouldn't help; it would just be replaced by something else equally buggy. If Macs do get away with it (does anyone know if they keep that 1MB available?), it's only because Apple can get bugs fixed, not because ARM is somehow better.

      This isn't new either. AmigaOS reserved the first 4 bytes of RAM because writing to null pointers was such a common error and would crash the system if something important was kept there. AmigaOS apps look at address 4 to find the base address of system libraries that let them bootstrap.
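
      As a small illustration of that Amiga convention, a hedged C sketch: the struct is left opaque, and the fixed address 4 (AbsExecBase) is of course only meaningful on AmigaOS itself:

      #include <stdio.h>

      struct ExecBase;  /* opaque here; the real layout is in exec/execbase.h */

      int main(void)
      {
              /* On AmigaOS, address 4 holds the pointer to exec.library, the
               * one fixed location a program can rely on to bootstrap all other
               * library calls. Anywhere else this read would simply fault. */
              struct ExecBase *SysBase = *(struct ExecBase **)4;
              printf("exec.library base: %p\n", (void *)SysBase);
              return 0;
      }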

      • by JBMcB ( 73720 )

        Modern PCs don't act like they are going to boot DOS. In fact most ship with the compatibility module needed to boot DOS disabled.

        The problem isn't the BIOS per se, it's the cruft needed in the CPU to support the old addressing modes required for BIOS boot. Does anything else use the old 16-bit addressing modes any more?

        Intel already did this with Itanium: no 16-bit support at all, using EFI. I know someone who wrote drivers for Itanium systems (3D workstation cards), and he said the weirdness of developing on a new ISA was negated by not having to futz around with the giant list of weird memory and addressing crap you have to worry about wi

    • by Junta ( 36770 )

      Not really. ARM in practice has had the following benefits:
      -More companies doing design work targeting the low-cost market that is embedded. ARM therefore prioritized all sorts of nuanced sleep behaviors. Meanwhile those sleep behaviors (e.g. 'suspend', but still able to wake every 10 seconds or so to process very specific work) were not a feature desired by the desktop/laptop market. Around the Windows 8 release, x86 chips largely gained these capabilities as well.
      -OS and application ecosystem designed aroun

      • by JBMcB ( 73720 )

        All this legacy junk is awkward to be sure, but by runtime it's pretty much gone, with little more than some awkward memory map to contend with.

        Memory modes aren't handled in microcode. They have to be supported by the MMU, which is all silicon. Ever see how much space the memory controller eats up on an x86 die? That's four operating modes, each with at least a half dozen addressing modes, along with a few extensions. In every modern OS the CPU is using one of those. When booting UEFI, two are *never* used, and virtual 8086 mode is only used when you are running an emulator.

        Heck, modern x86 still supports PAE, which, I think, a couple of databases

  • Foresight (Score:5, Funny)

    by dohzer ( 867770 ) on Thursday June 10, 2021 @01:05AM (#61472458)

    This is essentially why I built my current system with 32.001GB of memory.

  • 640K oughta be enough for any OS.
  • I'm having weird flashbacks....

  • Emacs should then be renamed to nmacs.

  • It's easier to deploy X windows in 1MB blocks...
  • So in C, "pointers" are just numbered boxes for storing numbers. You set the pointer to the number of box you want to use. The you can put things in the boxes.

    int main(const int argc, char* const* argv)
    {
    int *pi0 = 0;
    int *pi1 = 1;

    *pi0 = 2;
    *pi1 = 3;

    print( "sum=%d\n", *pi0+*pi1 );
    }

  • by Crass Spektakel ( 4597 ) on Thursday June 10, 2021 @12:40PM (#61473888) Homepage

    My lowest spec Linux/Unix/POSIX system was a 386sx16 with 2MByte of memory. Using a manually optimized and hand-built kernel I could do some text-mode work - even at 100x60 characters through the use of the ancient SVGATextMode toolchain - but barely anything else. I am ignoring the WRT54 router I tried using with 2MByte because it got me nowhere. Even adding 0.5MByte of additional memory was greatly appreciated. Thankfully I got some 8MByte of memory for free shortly after that, and besides that particularly awful 386sx16 I always had 8-16MByte.

    Btw, my lowest-resource x86 was a non-IBM-compliant 8088 with 16kByte of memory and a tape drive at school. Funnily enough, it was able to run some very early self-booting games like Psi5 Trading Company in pure text mode.

    On the other hand, my first computer ever was a CBM3032 with only 32kByte. Now imagine reserving 1MByte of memory on that one...

    • by hawk ( 1151 )

      Never mind CBM, look at the early Apple ][ models.

      There were *two* 12k configurations available. The first was a straight 12k, contiguous. The second, though, had a 4k hole after the first 4k, so that both pages of hires graphics could be used.

      The memory was in the same holes; there were three dip sockets for which a header was made that specified the locations of each bank of memory, with three identical plugs used. You could use both 16k and 4k banks at the same time, for 20k, 24k, and (I think) 36k c
