
Fedora Plans To Drop Support For Legacy BIOS Systems (linuxiac.com) 122

The Fedora 37 development team is considering dropping support for legacy, non-UEFI BIOS boot. Linuxiac reports: The Unified Extensible Firmware Interface, or UEFI, is a modern method of handling the boot process. UEFI is similar to legacy BIOS; however, the boot data is stored in a .efi file rather than in the firmware. In the case of Fedora, while the change may take some time, new Fedora x86_64 installations will no longer work on non-UEFI platforms. On x86_64 architectures, Fedora 37 will mark legacy BIOS installation as deprecated in favor of UEFI. While systems already using legacy BIOS to boot will continue to be supported, new legacy BIOS installations on these architectures will be impossible.

Comments Filter:
  • Ey-uh-uh-uh-uh-uhhhhfie
  • If only there were a way [rodsbooks.com] for BIOS-only PCs to support UEFI.

    FWIW, the link is almost 10 years old. No clue if this tool will work with Fedora 37 and beyond.

  • Bullet points (Score:5, Informative)

    by slack_justyb ( 862874 ) on Thursday April 07, 2022 @06:32PM (#62427164)

    Just to lay out some basics about BIOS. Pros and cons are always welcome in replies.

    BIOS, or at least the one we are so accustomed to, hails from the original IBM PC in 1981. The reason IBM's BIOS came to be known simply as "BIOS" is successful reverse engineering and the whole PC clone business from the days of hairspray and hostile takeovers. Basically, everyone had a BIOS that was pretty much IBM's BIOS, so we all just started calling it BIOS.

    The B in BIOS stands for Basic, and boy is that an understatement. When your system powers on and begins to run the BIOS code to bootstrap the OS, it's running in 16-bit real mode. So BIOS only gets about 1MB of addressable memory to do literally everything it might need to get that OS up and running. Hence a lot of hardware needs a kind of fallback interface that BIOS can talk to. Not getting into the details here, as I'm sure others would be more than happy to expand on them, but basically, your system knows very little about itself by the time the OS is getting loaded. Again, there are arguments about why that's not BIOS's job and whatnot, but it's a point worth making here.
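
    For a rough illustration of that ~1MB ceiling, here's a minimal sketch of the real-mode segment:offset arithmetic (nothing BIOS-specific, just the address math):

        #include <stdio.h>

        /* Real-mode code forms a 20-bit physical address as segment*16 + offset,
         * which is why 16-bit BIOS code tops out at roughly 1MB of addressable memory. */
        int main(void)
        {
            unsigned int seg = 0xFFFF, off = 0xFFFF;   /* largest possible pair */
            unsigned int phys = seg * 16u + off;       /* 0x10FFEF, just past 1MB */
            printf("max real-mode address: 0x%X (%u KB)\n", phys, phys / 1024);
            return 0;
        }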

    BIOS needs to use the Master Boot Record (MBR) system for identifying how to boot the OS. However, MBR uses 32-bit fields for sector addresses, which with 512-byte sectors caps the drive at about 2.2TB. Some of you might have run into this, maybe not. But if you have an OS on a drive bigger than that, MBR simply can't address the whole device, no matter where the actual boot partition is.
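
    To see where that ceiling comes from, here's a small sketch of my own (nothing Fedora-specific) that reads the four primary partition entries out of a 512-byte MBR dump - "mbr.bin" is a hypothetical file, e.g. taken with dd if=/dev/sda of=mbr.bin bs=512 count=1 - and prints the 32-bit LBA fields that impose the limit:

        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>

        /* Parse the 4 primary partition entries of a raw MBR sector (little-endian
         * on-disk format, which matches any x86 host). The 32-bit start/count fields
         * are what cap an MBR disk at 2^32 sectors, i.e. ~2.2TB with 512-byte sectors. */
        struct mbr_entry {                 /* 16 bytes per entry */
            uint8_t  status;               /* 0x80 = bootable */
            uint8_t  chs_first[3];
            uint8_t  type;                 /* partition type byte */
            uint8_t  chs_last[3];
            uint32_t lba_first;            /* 32-bit starting sector */
            uint32_t sector_count;         /* 32-bit length in sectors */
        };

        int main(int argc, char **argv)
        {
            uint8_t sector[512];
            FILE *f = fopen(argc > 1 ? argv[1] : "mbr.bin", "rb");
            if (!f || fread(sector, 1, sizeof sector, f) != sizeof sector) {
                perror("reading MBR");
                return 1;
            }
            for (int i = 0; i < 4; i++) {
                struct mbr_entry e;
                memcpy(&e, sector + 446 + i * 16, sizeof e);   /* table starts at offset 446 */
                printf("entry %d: type 0x%02x, start LBA %u, %u sectors\n",
                       i, e.type, (unsigned)e.lba_first, (unsigned)e.sector_count);
            }
            return 0;
        }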

    BIOS also (subjective warning here!!) hasn't aged very well in the decades we've had it. (Okay, subjective off, but I'm going to justify that, just give me a second.) ACPI is an addition to BIOS to handle all the power-management stuff, and the hacky way vendors have had to "add" ACPI has led to very interesting results, depending on what incantations your OS issues to work with ACPI. And ACPI is just one example of things we've bolted onto BIOS along the way. Now there are arguments for "clean slate" and "not BIOS's job to ensure it works." Again, I'm sure everyone will have opinions on it.

    Intel came up with EFI in 1998 to replace BIOS, and Apple picked it up, but hardly anyone else did. That spurred AMD, Intel, Microsoft, and a lot of PC (and Apple) makers to come together and build something everyone agreed on, which by 2007 was UEFI.

    UEFI can boot from a drive as large as 9.4ZB using the GPT partitioning scheme. Additionally, UEFI can run in either 32-bit or 64-bit mode, granting way more working space than BIOS had and much faster bootstrapping. There are also EFI binaries that can pre-load all kinds of drivers so that your OS has more information about the system it is booting into. In fact, UEFI adds a lot of features for power management, networking, graphics and so on, to the point that you can have a keyboard+mouse GUI and the ability to netboot straight from the firmware (no longer do we need a PXE boot disk; it's baked into the hardware in a standard way).
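
    The two ceilings are just the width of the sector-number fields; a quick back-of-the-envelope comparison, assuming 512-byte sectors:

        #include <stdio.h>

        /* MBR addresses sectors with 32-bit numbers, GPT with 64-bit numbers.
         * With 512-byte sectors that works out to ~2.2TB versus ~9.4ZB. */
        int main(void)
        {
            double mbr_bytes = 512.0 * 4294967296.0;            /* 512 * 2^32 */
            double gpt_bytes = 512.0 * 18446744073709551616.0;  /* 512 * 2^64 */
            printf("MBR ceiling: %.1f TB\n", mbr_bytes / 1e12);
            printf("GPT ceiling: %.1f ZB\n", gpt_bytes / 1e21);
            return 0;
        }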

    Now, of course, there's the negative I hear plenty of people bemoan: UEFI brings with it Secure Boot (insert gasps). I'm sure everyone could write a small book about this, so I'll just leave it at that.

    The thing to remember about Fedora's call is this: systems already running legacy BIOS are still supported, and as of F37, if you have a system that is BIOS-only, you will still be able to install. What is being slightly curtailed by the deprecation is new installs in legacy BIOS mode on systems able to do UEFI. As of 2020, quite a few motherboard makers are no longer shipping legacy BIOS support in their boards. The Fedora people feel we've crossed the threshold ("debatable, but you know, whatever") of motherboards on the market without legacy BIOS to warrant this change.
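
    If you're not sure which way a given box booted, the Linux kernel makes it easy to check: it exposes /sys/firmware/efi only when it came up through UEFI. A tiny sketch of my own (Linux-only, obviously):

        #include <stdio.h>
        #include <unistd.h>

        /* /sys/firmware/efi exists only when the running kernel was booted via UEFI;
         * if it's missing, this session came up through legacy BIOS (or a CSM). */
        int main(void)
        {
            if (access("/sys/firmware/efi", F_OK) == 0)
                puts("booted via UEFI");
            else
                puts("booted via legacy BIOS / CSM");
            return 0;
        }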

    Ah, but what of "older" computers? The thing is, a sizable number of mobos were already shipping UEFI in 2007, and by 2009 you'd be hard pressed (not impossible) to find a mobo without it. Now I'm

    • After a while, when Microsoft came out of the dark ages, the BIOS wasn't needed as much and became nothing more than a bootloader. Doing BIOS or UEFI calls in a modern OS is ridiculous, especially in Linux, because it really doesn't use the BIOS much at all once you've started booting.

      Part of the problem is that the PC architecture is such a terrible ad-hoc mess that the BIOS/UEFI are really there just to hide all the creaky details.

      • Doing BIOS, or UEFI calls, in a modern OS is ridiculous.

        Linux syscalls are pretty much analogous to PC BIOS calls; the concept lives on under a different name, nothing more.
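
        To make the analogy concrete, here's a minimal user-space sketch (Linux-only, my own illustration): a syscall is a numbered entry point whose arguments and return value travel in registers per the syscall ABI, much as the old INT 10h/13h BIOS services took theirs in AX/BX/CX/DX.

            #include <unistd.h>
            #include <sys/syscall.h>

            /* Invoke write(2) by its syscall number through the raw syscall(2) wrapper:
             * fd, buffer pointer and length go in, a byte count (or -errno) comes back. */
            int main(void)
            {
                static const char msg[] = "hello via raw syscall\n";
                long n = syscall(SYS_write, 1, msg, sizeof msg - 1);
                return n < 0 ? 1 : 0;
            }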

        • Comment removed based on user account deletion
          • by drnb ( 2434720 )

            No, they're not even remotely similar. Indeed, they're polar opposites.

            They both represent an API offering low-level functionality where you pass parameters and returns in registers and invoke functions via interrupts.

            syscalls are how you communicate with the Linux kernel.

            You are confusing what they call with how they are called. Even with the former there is some similarity, BIOS was providing OS-like functionality.

            BIOS calls started off as how the operating system would access the hardware back when hardware was less standardized ...

            Absolutely wrong. BIOS started out absolutely standardized, one target, the IBM PC. The programmer's reference manual for BIOS calls was literally the IBM assembly language source code listing of BIOS. IBM would sell t

      • Re:Bullet points (Score:4, Informative)

        by tlhIngan ( 30335 ) <slashdot&worf,net> on Friday April 08, 2022 @05:57AM (#62428116)

        After a while, when Microsoft came out of the dark ages, the BIOS wasn't needed as much and became nothing more than a bootloader. Doing BIOS or UEFI calls in a modern OS is ridiculous, especially in Linux, because it really doesn't use the BIOS much at all once you've started booting.

        Part of the problem is that the PC architecture is such a terrible ad-hoc mess that the BIOS/UEFI are really there just to hide all the creaky details.

        The BIOS is composed of many different pieces and parts. There's the bootstrap part, which is the bit that performs basic power-on tests, configures hardware and then does the necessary bits to load the OS - either via the legacy 16-bit path or the modern UEFI (32/64-bit) path.

        But there are other parts, and those parts are important to operating systems. The bootstrap code also produces a series of tables in RAM that tell the OS the hardware configuration - where chunks of memory are, what the PCI bus (including PCIe) configuration is, peripherals and other things. Sure, the OS can probe and reconfigure things like this itself, but you'll find a good chunk of the OS needs to be loaded and running before it can do that. As such, knowing how to access devices "early" can be a great help in getting the system up and running.

        Example - in Linux, you don't see Tux or the console messages until after the console hardware is initialized; in the meantime, boot messages, including the Linux signon message, are buffered in the printk buffer. Console initialization happens near the end of the kernel startup sequence, when the kernel probes for hardware and discovers what will be the console device - a physical console, a framebuffer, a serial port, etc. Once the console is initialized, the printk buffer is emptied to it.

        But during this initial startup, those firmware-initialized peripherals are usable by the operating system. Linux has "early printk," which can use these pre-initialized devices to output printk messages basically from the get-go; once the kernel has initialized itself to the point that it can run the console itself, it disables early printk and finishes the console initialization by dumping the printk buffer.

        I wouldn't doubt Windows also uses the pre-initialized peripherals for things like the kernel debugger and the splash screen shown on boot.

        Another module is ACPI. This started as a basic set of tables for power management - power management on the PC was a finicky affair, since laptops existed long before any form of OS-managed power, so the early revisions of PC power management were done by the BIOS. But by 1995 this was unworkable, so as part of the Plug and Play specification came Advanced Power Management, where the OS itself would manage the system power states. This later evolved into ACPI, which is a bytecode system that provides basically OS-independent drivers allowing various parts of the PC to be controlled - from things like switches (e.g., the lid closing on a laptop) to batteries, to sleep, suspend, hibernate and even power off.
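
        You can poke at those tables from a running Linux box; the kernel exports the raw blobs under /sys/firmware/acpi/tables/. A rough sketch (needs root, and assumes an x86-style little-endian host) that dumps the standard 36-byte header of the DSDT, the table that carries most of that bytecode:

            #include <stdio.h>
            #include <stdint.h>
            #include <string.h>

            /* Print the standard ACPI table header of the DSDT as exported by Linux.
             * Everything after this 36-byte header is the AML bytecode described above. */
            int main(void)
            {
                uint8_t h[36];
                FILE *f = fopen("/sys/firmware/acpi/tables/DSDT", "rb");
                if (!f || fread(h, 1, sizeof h, f) != sizeof h) {
                    perror("reading DSDT");
                    return 1;
                }
                uint32_t length;
                memcpy(&length, h + 4, sizeof length);      /* total table size, header included */
                printf("signature: %.4s\n", (char *)h);     /* "DSDT" */
                printf("length:    %u bytes\n", (unsigned)length);
                printf("OEM ID:    %.6s\n", (char *)h + 10);
                return 0;
            }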

        It's the primary reason why BIOS updates can lead to improved system stability - it's not that they suddenly load the OS better; rather, core functions the OS depends on often get bug fixes in those OS-independent drivers, and the system becomes more stable because the drivers now behave correctly.

        So while the old legacy functions of a BIOS are no longer used (they were useful in the '80s and led to a rise in non-IBM clones that could run MS-DOS software, called MS-DOS clones; these were often not IBM PC compatible at all but were capable of running most major applications of the time, like Lotus 1-2-3 and WordPerfect), to say it's completely useless after the OS boots is also incorrect.

    • AFAIK you can boot from a GPT drive with BIOS (or in legacy mode), just that you need to create a "bios-grub" partition as the first partition on the drive.

      • by vbdasc ( 146051 )

        Yes, but this "GPT drive" contains an actual MBR (the protective MBR), per the GPT specification, so to GPT-unaware software the GPT drive mimics a familiar MBR drive.

        • Sure. It still means you can boot from a large drive using BIOS, either "real" BIOS or "legacy mode" of a newer device.

    • by drnb ( 2434720 ) on Thursday April 07, 2022 @11:36PM (#62427726)

      The reason IBM's BIOS came to be known simply as "BIOS" is successful reverse engineering and the whole PC clone business from the days of hairspray and hostile takeovers. Basically, everyone had a BIOS that was pretty much IBM's BIOS, so we all just started calling it BIOS.

      It was simply BIOS before the clones. The IBM source code listing of BIOS was the official API documentation. You could buy the BIOS listing from IBM in a little maroon (?) three-ring binder. You needed the assembly language BIOS listing to find out which registers held and returned which parameters for calls to BIOS. PC software rarely made calls to DOS only; many times you needed to call BIOS. PC apps were generally a combination of DOS and BIOS calls.

      That's why the cloning that came later was so important. Much of the software running on the PC was coding to an API defined by a single assembly language source code example, "the BIOS".

    • by AmiMoJo ( 196126 )

      I don't get the hate for Secure Boot, I use it extensively to protect my systems from malware and from tampering.

    • by vbdasc ( 146051 )

      BIOS needs to use the Master Boot Record (MBR) system for identifying how to boot the OS. However, MBR uses 32-bit fields for sector addresses, which with 512-byte sectors caps the drive at about 2.2TB. Some of you might have run into this, maybe not. But if you have an OS on a drive bigger than that, MBR simply can't address the whole device, no matter where the actual boot partition is.

      BIOS does NOT need MBR to boot an OS, and has never needed it. The MBR is not a part of the BIOS specification, but a convention and an industry standard, created by IBM for their IBM PC/XT computer, and faithfully followed by all clone makers and OS makers and diagnostic software makers since then. BIOS only needs to be told the disk location of a boot sector, which then gets loaded in memory and jumped to, and while MBR can be used for this purpose, its use is not mandatory.
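
      In fact, about the only thing a typical BIOS checks before jumping is the 0x55AA signature in the last two bytes of that sector; the partition table is just a convention layered on top. A small sketch over a 512-byte dump ("boot.bin" is a hypothetical file, e.g. taken with dd if=/dev/sda of=boot.bin bs=512 count=1):

          #include <stdio.h>
          #include <stdint.h>

          /* Check the 0x55AA boot signature at bytes 510-511 of a sector dump.
           * That signature is essentially all the BIOS verifies before loading the
           * sector at 0000:7C00 and jumping to it; MBR partitioning is optional. */
          int main(int argc, char **argv)
          {
              uint8_t s[512];
              FILE *f = fopen(argc > 1 ? argv[1] : "boot.bin", "rb");
              if (!f || fread(s, 1, sizeof s, f) != sizeof s) {
                  perror("reading boot sector");
                  return 1;
              }
              puts(s[510] == 0x55 && s[511] == 0xAA ? "boot signature present"
                                                    : "no 0x55AA signature");
              return 0;
          }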

      About the large disks - the "in

  • UEFI sucks sometimes (Score:5, Informative)

    by psergiu ( 67614 ) on Thursday April 07, 2022 @11:49PM (#62427740)

    Given a Dell server:
    BIOS will boot the 1st device in the list then the next ones in order. The "Hard Disk Failover" mode will try booting from subsequent HDDs if the 1st fails - excellent when RAID disasters happen.
    UEFI needs to have the OS registered with the UEFI device map and if that's ever disturbed you get a server waiting in the UEFI shell requiring manual intervention.
    BIOS defaults the console to 80x25 text mode. UEFI will try to detect the maximum resolution of the attached monitor and might leave you in a very high-resolution framebuffer mode - the last thing you need when you have to fumble on the console in the middle of the night, either from a tiny iDRAC remote window or on the datacenter floor with a crash cart whose LCD will proudly say: "Resolution out of range"
    Yes, you can switch even the newest Gen 15 servers to BIOS boot mode.

    • by La Gris ( 531858 )

      There was a time when you'd have a default RS-232 9600bps 8N1 serial console for servers, both at the BIOS level and also in LILO and the system console if you chose to.

      It was very handy to just plug in a terminal and look at the output; as a bonus, you could group them into a console server and telnet/ssh to it remotely, so you did not need to enter the noisy, cold DC but could handle things remotely, even from home, if you were on call-out duty (astreinte).

      • by psergiu ( 67614 )

        Terminal MUX in every rack + serial cables for every server = unhappy datacenter people (more cables), unhappy beancounters (extra cost) and unhappy Windows admins (a Windows server without a GUI? Blasphemy!)
        There's IPMI SoL (Serial-over-LAN), but security people say IPMI connections are not encrypted well enough for their liking to carry a root password.

    • UEFI needs to have the OS registered with the UEFI device map

      No it doesn't, but it *can* be set up that way. If this is an issue for you, then you should select only CSM options in your boot order.

      UEFI will try to detect the maximum resolution on a monitor and might leave you in a very high resolution framebuffer mode

      It'll match the resolution of the display attached. If your display doesn't support detecting the resolution maybe you should fix that.

      Honestly it sounds like you're battling with a) crap hardware, b) a singular crap example of UEFI implementation, c) user error and somehow blaming that all on the existence of UEFI.

      Don't do that. It's not helpful.

      • by psergiu ( 67614 )

        No monitor attached. It's a server in the datacenter. Yes, some other hardware and implementations might be better, that's why I put "sometimes" in the title.
        Which other (non-Dell) servers have you seen with better UEFI implementations ?
        Thanks !

    • by jabuzz ( 182671 )

      What you missed is that with UEFI, boot times are massively slower. It was bad enough booting a server in BIOS mode. In UEFI mode, boot times are just terrible.

  • I can't get the latest Ubuntu to boot on non-UEFI hardware. This is for a kid's computer to keep an old, yet decent system out of the landfill.

    Any suggestions for other distros?

    • by mz721 ( 9598430 )
      I recently installed Star Linux, which is essentially Devuan (which aims to be Debian but without systemd), on a Dell E520 from 2006. That is a BIOS machine, and I didn't have to think about it; I just did the install and it all works. I think some of the distros focused on modern hardware and servers will drop BIOS, but it will hang on at the edges of the Linux world for a long time. You can always install FreeDOS!

      I too have set up Linux machines for kids to get extra use out of old hardware. Depe
    • by vbdasc ( 146051 )

      Have no idea about Ubuntu, but Debian still works on BIOS systems. And Slackware will surely support these for a few decades to come :)

    • by Reziac ( 43301 ) *

      I use PCLinuxOS with KDE desktop. Runs well on any x64 (I have it on a 15 year old laptop). On an i7-3xxx, it boots to the desktop in 5 seconds flat.

      -- Caveat: something in the current kernel does not like some older ATI GPUs, and won't install on 'em. Wasn't just us affected, but be aware.

  • Unless something has changed, having your system set up to boot UEFI means that I now cannot (easily) just take a dd image of the full /dev/sda, move it to another system, and boot from it.

    I always turn off UEFI so that I CAN do this and clone systems at work for labs.

    Odd that systems installed fresh with UEFI seem to have faster disk I/O than the same disk initialized with old-style geometry and setup - all using aligned sectors, of course.

    I don't use Red Hat crap anyway. My last good Red Hat experience was 4.

  • by mrfaithful ( 1212510 ) on Friday April 08, 2022 @08:30AM (#62428434)
    Sure, I accept that BIOS had to go; complicated fiddles and workarounds and emulation to bootstrap and chainload things more complicated than a single desktop with a single drive. But my problem is that for the past 10 years I've been taking old Windows boxes from various vendors, or even custom-built stuff, and trying to put some form of Linux or BSD on them, and I find that it's successful maybe 40% of the time. Whether it's Lenovo and their "secure boot really only checks for the Windows boot loader and fails everything else," or Dell and their "storage controller will fail" version, or ASRock and their "fuck Linux" implementation, it is to my long-standing irritation that I wind up having to just boot legacy BIOS and install it that way. Of course, this is all down to shitty hardware vendors, but it means that if all the major distros start following suit, I might just have to landfill otherwise working PCs because the UEFI on them is crap when all I needed was the simplest boot imaginable. I feel like UEFI is a very "enterprisey" standard that got bloated and crufty from the word go, and it really stands in the way of a simple life.
  • Wrong move. Bad decision. Stupidity.


"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell

Working...