Hardware Hacking Software Linux

Get Speed-Booting with an Open BIOS 235

An anonymous reader writes to mention that IBM Developer Works has a quick look at some of the different projects that are working on replacing proprietary BIOS systems with streamlined code that can load a Linux kernel much faster. Most of the existing BIOS systems tend to have a lot of legacy support built in for various things, and projects like LinuxBIOS and OpenBIOS are working to trim the fat.
This discussion has been archived. No new comments can be posted.

  • Flash drives (Score:5, Interesting)

    by drivinghighway61 ( 812488 ) on Wednesday October 10, 2007 @01:25PM (#20929055) Homepage
    Speeding up BIOS processes combined with flash boot drives will seriously decrease loading time. Are we closer to instant-on computers?
    • Re:Flash drives (Score:4, Interesting)

      by CastrTroy ( 595695 ) on Wednesday October 10, 2007 @01:30PM (#20929125)
      I seem to remember the Commodore 64 being instant on. Granted, our current computers are much more complicated than a Commodore 64, but it would be nice to get back to that instant-on era. Everything else seems to have gotten faster or remained the same speed; the only thing that seems to continually get slower is boot times.
      • Re: (Score:2, Insightful)

        by Nazlfrag ( 1035012 )
        I can remember the Amiga taking 1.5 seconds to boot into a full multitasking GUI, and rendering a fractal while it boots. One day they'll get it again, thanks to the success of OSS.
        • by joe 155 ( 937621 )
          I had an Amiga, they were ace. It was so long ago though, and all I really used it for was Pang (what a game), so I can't really say how good the boot time was to full GUI and fractal... But I am reminded of a saying that I read somewhere I can no longer remember (perhaps Wikipedia): software is getting slow faster than hardware is getting fast - or at least "faster" than the software can take advantage of. I'd say that we might never get back to one-second boots with any major OS (although I have heard good
        • Re: (Score:3, Informative)

          by Saxerman ( 253676 ) *
          The original Amiga 1000 had the Kickstart ROM chips, which allowed them to boot nigh-instantly. This included the important parts of the OS, and later even drivers and the kitchen sink. You would literally have a splash screen for a second, and then have a functioning computer complete with GUI. Of course, this meant surgery was required to swap in a new Kickstart ROM. And as later software required different versions of Kickstart to run, we started playing with different software kickers which allowe
          • Re:Flash drives (Score:4, Insightful)

            by SenorCitizen ( 750632 ) on Wednesday October 10, 2007 @03:10PM (#20930623)
            No, that's not quite right. The original Amiga 1000 didn't come with Kickstart ROMs, because the OS was still in a state of change. Instead, you had to load Kickstart from disk, and it ate up 256k of the 512k of RAM installed. Later Amigas came with ROM Kickstarts and could be started without a disk. The full Workbench environment still had to be loaded from disk, just like with the A1000 - on which you actually needed two disks to get the whole OS up and running.

            The Atari ST, on the other hand, had the whole OS in ROM, except for the very first models. Even STs weren't instant-on though, because the bootloader would waste at least half a minute looking for a disk to boot from - it was actually faster to have a GEM disk with custom settings in the drive when turning the power on than booting from ROM only.

            • Re: (Score:2, Informative)

              by domatic ( 1128127 )
              You didn't even need a GEM disk. A blank floppy would suffice to satisfy the on-boot disk check. Gosh, I haven't thought about doing that in years. It used to be a way of life.........
          • I think you have it backwards there partner... My A1000 (an original from Oct. 1985) required the "Kickstart" boot disk, and THEN you booted "AmigaDOS".

            My A2000 had Kickstart in ROM requiring only the AmigaDOS disks. I upgraded those from 1.3 to 2.0.

            The reason was that the ROM portion (Kickstart) was in flux when the A1000 came out, and they knew that they would need to update it several times, and didn't want to deal with swapping ROMS all the time. The original A1000 was hardly polished.

            None of the Amigas
            • None of the Amigas I had (A1000, A500, A2000, and an A3000) were instant on at all. They all required AmigaDOS via floppy or hard disk to boot the rest of the way.

              Pah. First thing I did when I got 1MB, then 1.5MB of memory(!) was follow the instructions in the manual to make a "Recoverable RAM Drive" (RAD:) that was bootable. :)

            • Re: (Score:3, Interesting)

              by rs79 ( 71822 )
              Correct.

              Its predecessor the Atari 800 was instant on. Pop in the BASIC cartridge, turn the power on and there you were. In BASIC land. At least the Amiga had C.

              And cool graphics.

              I was shocked to find there were people (Mike Meyer, how can I remember this stuff and still not remember where I put my glasses?) that didn't give a rat's ass about the graphics but just wanted to code C on a CPU with a linear address space ("segments are for worms").

              Matt Dillon (of FreeBSD and DragonFly BSD fame) ported Bash to the
          • Re: (Score:3, Interesting)

            by GreggBz ( 777373 )
            You definitely have it backwards. The A1000 had no Kickstart ROM, you had to load it from disk. This changed with every model afterwards. I will say that it took some time to load Workbench, with that drive churning away.

            If you just wanted a shell and bypassed your startup sequence it was faster still. Even with the full-blown GUI, it was still much quicker than its PC counterparts.

            Now, an Amiga with a flash drive, that's a sight to see. My A1200, which once had a 44 pin IDE-Flash adapter booted all
          • by Bert64 ( 520050 )
            You have it the wrong way round...
            The A1000 loaded Kickstart from disk, and used 256KB of its 512KB of RAM to store it. Later machines (except the A3000) had Kickstart in ROM. Kickstart includes most crucial parts of the OS, but you still needed a floppy or some other media to boot from. Although Workbench was in ROM, you needed the "loadwb" command, which was disk-based. Although the A4000T had the workbench.library on the hard drive.
            My Amiga would take about 6 seconds to boot, that includes drive spin up/
      • Re: (Score:3, Insightful)

        Sure, the C=64 was instant-on, but you had like 30-second seek times for the floppy disk, which is where anything exciting to run lived. If all you wanted to do was simple command-line instructions to "load $program * , 8" you could call it "instant on", but you got almost nothing for it, and to get any user app functionality up and running still took a long time, vastly longer than it does now.
      • Re:Flash drives (Score:4, Interesting)

        by Waffle Iron ( 339739 ) on Wednesday October 10, 2007 @01:59PM (#20929523)
        Sure, the C64 booted in a jiffy. Then it took 5 minutes to load a 50 KB app from the floppy drive. (Which is kind of silly, since the floppy drive had a CPU inside with as much horsepower as the main system unit itself.)
        • Re:Flash drives (Score:4, Interesting)

          by alan_dershowitz ( 586542 ) on Wednesday October 10, 2007 @02:48PM (#20930271)
          If you plugged in a cartridge it was instant-on for those apps too. Now there's even a peripheral, the MMC64 [wikipedia.org] that lets you use SD cards on your Commodore 64, so I don't see anything that indicates we couldn't have instant on for writable media either nowadays, which was the point of the original post.

          Incidentally, did the 1541 drive have as fast a CPU as the Commodore 64? I know the 1541-II's CPU was actually faster, and you could actually offload CPU processing to it across the serial interface, and some games even did this.
        • by rucs_hack ( 784150 ) on Wednesday October 10, 2007 @03:11PM (#20930641)
          All you C64 people, grr.

          My Spectrum was awesome, and by dint of the fact that I couldn't afford a C64 (or even a VIC-20), I opted for the '48K ZX Spectrum beats your computer any day' line of reasoning, and affected temporary blindness when anyone started showing off sprites.

          Ah yes, the hours of tapping away on a rubber keyboard. Hungry Horace, oh how many evenings you ate.

          I took it out of storage and showed my son last year. He looked at it in a puzzled fashion and asked where the dvd drive was.

          Crying is not manly, so I just mumbled and put it away again..
      • Re:Flash drives (Score:5, Informative)

        by Cyberax ( 705495 ) on Wednesday October 10, 2007 @02:00PM (#20929559)
        I work with embedded systems, and my MIPS-based 166MHz board boots Linux in about 5 seconds, kernel loading starts almost immediately after power on.

        I always wanted to have the same capability for my notebook. Sigh...
        • Re: (Score:3, Insightful)

          by droopycom ( 470921 )
          The thing is, if you want capabilities on your embedded systems to come close to your notebook's capabilities, they will take 1 minute to boot instead of 5 sec.

          My CD player used to boot instantly... but just try to boot an HD-DVD or BluRay player....

          • Re: (Score:3, Informative)

            by Cyberax ( 705495 )
            This embedded system contains:
            1. Three Ethernet controllers.
            2. VGA controller.
            3. Two RS-232 controllers.
            4. Flash drive.
            5. PCI bus.
            6. WiFi card.
            And it starts booting Linux kernel in less than a second after power on.

            I don't see why my notebook should take about 8 seconds JUST TO START BOOTING THE KERNEL.

            Current HD players are too brain-damaged, so it's not a good comparison.

      • I seem to remember the Commodore 64 being instant on.


        Really? I remember waiting a very long time.
        One of the major advantages of Fastload (TM) was that it by-passed the god-awful slow memory test on power up.
        I suppose "instant" is a matter of perspective.

        -- Should you believe authority without question?
      • I'm completely behind you - this would be really nice, but it doesn't seem to be the way technology has been panning out. Every new device I get takes longer than the last one: Cable boxes, dvd player, music players, even cellphones. It seems every time I upgrade there is more crap I don't use and it means I need to wait longer from the time I press "on" until the time I can actually use it.

        I hope these folks are successful and can lend some advice to device manufacturers as well!
    • Re: (Score:3, Interesting)

      In Windows you can have the computer's current state be saved and loaded the next time the computer is booted. Is there an option to load the state of the machine on a fresh boot? That would save time on reboots.
      • Re: (Score:3, Interesting)

        by vux984 ( 928602 )
        Is there an option to load the state of the machine on a fresh boot? That would save time on reboots.

        I have 4GB of RAM (2x2GB) with Windows x64, and expect to have 8GB within the year. I've tried hibernate, or whatever it's called. It's not really faster. The time to save and load that 4GB file is non-trivial. In theory it's nice when you've got a lot of open stuff on the go, but then I don't trust it enough not to save all my work properly anyway.

        Overall, I don't think it's that great, and I particular
        • by hjf ( 703092 )
          what do you want 4GB of ram for anyway?

          OK let me rephrase: what do you want 4GB of RAM for anyway, if you don't have a RAID 0+1 array of Seagate Barracudas to make disk writes quick?

          I use my (desktop) computer with S3 suspend. 5 seconds later and it's on again (it takes more time for my monitor to wake up). There's only one problem though: sometimes, the Bluetooth dongle takes a longer nap and I have to wait about 30 seconds to have my keyboard back.
          • by vux984 ( 928602 )
            what do you want 4GB of ram for anyway?

            My memory usage when running my usual set of apps (excluding vmware) is around 2.5GB to 3.0GB. My comment about upgrading to 8GB was based largely on my use of VMWare; and the fact that I run various linuxes, along with XP Pro and Vista x32 as guest OSes (not all at once, but usually more than one); and that uses gobs of RAM.

            OK let me rephrase: what do you want 4GB of RAM for anyway, if you don't have a RAID 0+1 array of Seagate Barracudas to make disk writes quick?

            I
          • what do you want 4GB of RAM for anyway

            Not sure what HE wants it for, but I use it for running multiple virtual machines. Also, the extra ram is nice for disk cache - especially if you are compiling.
        • I also have 4 GB of RAM on XP x64, and I use S3 suspend (Suspend to RAM - i.e., not hibernation) all the time. For me, it is significantly faster both to shut down and come back up, and it has the great advantage of turning off all the fans, etc., on the computer, so that it doesn't make any noise. It works flawlessly probably about 99% of the time.

          I agree though, hibernation is often more of a pain than it's worth.
          • by Bert64 ( 520050 )
            Apple's "safesleep" is good...
            It suspends to ram, but also writes out the memory to disk as if it was hibernating. So if you lose power while in sleep mode, the machine comes back online properly (it just takes longer).
        • by Bert64 ( 520050 )
          It should only swap out the whole 8GB if you're actually using all that memory; at least software suspend on Linux works this way.
          I figure if I'm running so many apps that I'm using 8GB of RAM, then loading memory from disk will actually be quicker than loading all those apps manually.
          Also, at least with software suspend, it can compress the image, and it resumes fairly quickly because it swaps the core kernel back in and swaps all the apps back in later (chances are you won't be actively using every app your r
      • by Korin43 ( 881732 )
        Doesn't that get rid of the point of rebooting? I use hibernate when I want to stop using power, and reboot when Windows gets screwed up.
        • My point was having the option of loading a fresh machine state upon reboot, such that loading this saved state or booting the normal way would bring the machine to the exact same usable state. This machine state could be compressed so that it wouldn't take too long to extract and load. Another poster had expressed concern about loading 4GB of machine state; well, on a fresh boot, most of that 4GB is filled with nothing relevant.
    • weren't uncommon in the early 80's - but they had a ROM BASIC built in. When we got the PC there was suddenly a need for a lengthy and cumbersome procedure before the computer was even remotely usable.

      All those procedures performed by the BIOS today are often unnecessary unless you run a legacy operating system like DOS that actually uses the BIOS. Linux, for example, only uses the BIOS for a limited number of tasks, doing most of the hardware management in the kernel without much need for the BIOS to be around.

  • by CodeBuster ( 516420 ) on Wednesday October 10, 2007 @01:29PM (#20929111)
    Isn't it more important for the BIOS to present an efficient abstraction of certain hardware resources that *any* OS can easily communicate with according to a standard interface than to optimize support, possibly at the expense of flexibility and abstraction, for a single OS (even if that OS is Linux)? The violation of abstraction merely for performance improvements is something that engineers should generally be very reluctant to do.
    • In theory, yes. (Score:5, Insightful)

      by khasim ( 1285 ) <brandioch.conner@gmail.com> on Wednesday October 10, 2007 @01:35PM (#20929205)
      But the problem is that BIOSes cannot be trusted today.

      So the more advanced operating systems probe the devices themselves to see what capabilities are available.

      We've arrived at the point where we need to choose between updating the BIOSes on the motherboards every time a new capability is added (and on all previous motherboards) ... or just simplifying the BIOS to the point where it can boot the OS and allow the OS to probe everything.

      It's easier to update the OS than the BIOS.
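A rough illustration, on any modern Linux box, of the OS doing its own probing rather than trusting BIOS tables (these are standard Linux tools; the output is machine-specific):

```shell
# The kernel enumerates PCI devices at boot by probing configuration
# space directly, without asking the BIOS; lspci reads the result
# (skipped quietly if lspci isn't installed):
lspci 2>/dev/null | head -5

# IRQ routing as the kernel actually configured it, not as the
# BIOS advertised it:
head -3 /proc/interrupts
```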
    • Abstraction is important if you're sending boxes out the door not knowing what they'll be doing. If you know what they'll be doing (i.e. booting a Linux kernel, etc.), you can optimize them for your selected environment.
      • by CyberLord Seven ( 525173 ) on Wednesday October 10, 2007 @02:04PM (#20929609)
        Danger! Will Robinson! Linux boxen tend to be used far longer than Windoze boxen.

        Purely anecdotal, but I see a LOT of Linux boxen that are very old running not-so-old Linux kernels.

        This means, over a period of time, you have a greater chance of creating a NEW Linux only legacy support issue with newer kernels running on old machines.

        This should not stop progress, but it is something that should be recognized up front.

        • Re: (Score:2, Interesting)

          by phantomlord ( 38815 )
          My nntp/webserver:

          cat /proc/cpuinfo
          processor : 0
          vendor_id : AuthenticAMD
          cpu family : 5
          model : 8
          model name : AMD-K6(tm) 3D processor
          stepping : 12
          cpu MHz : 451.021

          # cat /proc/version
          Linux version 2.6.22-gentoo-r6 (root@hell) (gcc version 4.1.2 (Gentoo 4.1.2)) #1 Fri Sep 14 15:07:46 EDT 2007

          IIRC, I bought the processor and motherboard in May 1999, probably around the same time I finally registered my /. account.

      • by Intron ( 870560 )
        So if you know they will be running Windows, it should be OK to prevent loading any other OS? WRONG
        • If it's my computer then damn straight.

          See, I've got a Linux machine. I've got 2 Macs. I've got 3 Windows machines (the one on my desktop, the laptop I use to play movies on my TV, and the laptop that I actually use as a portable), and a hacked X-box - yeah, I'm a geek.

          That Windows desktop machine? It's gonna run Windows. FOREVER, until I throw that bad boy in the trash it's gonna be a Windows machine because I have other OS's on other machines. And no, I don't really care if whoever pulls it out of the
    • by krog ( 25663 ) on Wednesday October 10, 2007 @01:44PM (#20929341) Homepage
      Modern OSes don't trust what the BIOS tells them, due to older BIOSes that can't be trusted. With this fact in mind, you can imagine how getting the BIOS mostly out of the way can gain a few seconds at boot time without losing anything practical.
    • Re: (Score:3, Insightful)

      by ultranova ( 717540 )

      Isn't it more important for the BIOS to present an efficient abstraction of certain hardware resources that *any* OS can easily communicate with according to a standard interface than to optimize support, possibly at the expense of flexibility and abstraction, for a single OS (even if that OS is Linux)?

      Why? Does any OS actually use the BIOS for anything except booting anymore? AFAIR even most DOS programs bypassed BIOS screen routines (which is why redirection didn't work so well on DOS) and talked to the

    • Re: (Score:3, Interesting)

      Funny, that is what the OS is supposed to do also. But now they come with stuff built in. Maybe if the BIOS was left alone and the OS was fixed to do just what it was supposed to do, and not worry about the rest of the crap.

      It does not matter if I run Linux or Windows; they both start with crap running in the background. A normal user has no clue what is running. Why not, when you install the OS, just ask: "Do you want a Firewall? Do you want a Server? Do you want to update your system time over the inter
      • I am just saying: let the BIOS do its job... boot the system.

        It is still wise (at least in theory) to allow the BIOS to handle some low level hardware issues behind the abstraction barrier, at least until the OS specifically overrides a certain function for direct control (assuming that it is logical to allow such a low level override). The abstraction layer allows both the OS and the BIOS to vary independently without causing changes in each other and that is a good thing. Suppose, for example, that your
        • Re: (Score:3, Interesting)

          by vidarh ( 309115 )
          I don't think any reasonably modern OS actually does what you suggest, other than possibly as a last resort fallback (though I'm not even sure anybody does that). Linux for example can run perfectly fine on a BIOS-less system with only extremely minor modifications in the boot/init code (I don't know if even that is needed anymore - it's been years since I did that last).
    • Generally? What if performance is your goal?
    • Re: (Score:3, Informative)

      by Anonymous Coward
      Isn't it more important for the BIOS to present an efficient abstraction of certain hardware resources that *any* OS can easily communicate with according to a standard interface than to optimize support, possibly at the expense of flexibility and abstraction, for a single OS (even if that OS is Linux)?

      These guys are simply taking advantage of the fact that the BIOS is an unusably bad abstraction. Linux doesn't make BIOS calls, nor does Windows (since before Windows 2000). If you're booting Linux and XP,
      • by empaler ( 130732 )

        Linux doesn't make BIOS calls, nor does Windows (since before Windows 2000).
        I was about to refute you, but according to MS KB321779 [microsoft.com] you are right on the money. Windows NT 4 (not really that surprising) and Windows 98 (WTF?!) were "Plug and Play Capable". For some reason they're also asserting that WinME is one, but that must be a typo.
  • by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Wednesday October 10, 2007 @01:31PM (#20929151)
    The majority of boot time is spent initializing drivers and bringing the system to a usable state. The 3 seconds it takes for the BIOS to init the disk, locate the MBR, load the bootloader, and jump to it is negligible compared to the tedious hardware scanning and initialization done by the OS itself when it is finally loaded by the bootloader.

    If you want to speed up the boot sequence, take a look at cutting the number of attached devices down to the bare minimum. Don't start any services during init. Do as little as possible to get the system to its usable state and you'll have minimized the boot time. Unfortunately, technology just doesn't work that way. System requirements (of both a hardware and a software nature) will require that you perform extra initialization at boot time, so any possible gains are already offset by the increased load.
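As a sketch of the "do as little as possible at init" idea, here is one way to inventory what a SysV-style init starts at boot; the runlevel directory and the update-rc.d example are Debian-flavored assumptions, not universal paths:

```shell
#!/bin/sh
# Count the services symlinked to start in runlevel 2 (classic SysV
# layout; the directory name is an assumption -- some distros use
# rc3.d or rc5.d instead).
runlevel_dir=/etc/rc2.d
count=$(ls "$runlevel_dir" 2>/dev/null | grep -c '^S')
echo "init starts $count services from $runlevel_dir"

# To stop an unneeded one from starting (Debian syntax, example name):
#   update-rc.d -f bluetooth remove
```

Each `S*` symlink is a service that must start, usually serially, before the system is "usable" - pruning that list is the cheapest boot-time win available.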

    Getting off of x86 may be one way to optimize the boot process, but how many of us really have the wherewithal to make an architecture jump from x86?
    • Re: (Score:3, Insightful)

      by Chirs ( 87576 )
      I don't know what hardware you have, but it takes a LOT more than 3 seconds for my machine to do its POST, check the floppy drive, check the CDROM, check the SCSI cable, find my hard drives, check the partition tables, and finally start up my bootloader.

      Probably more like 30 seconds.
      • by jabuzz ( 182671 )
        30 seconds, that is positively speedy. On my servers it is around 150 seconds before the bootloader even gets a look in. Admittedly a good chunk of that is as it spins up the drives in the RAID array one by one. You can turn it off, but if you turn all the servers on at once (like after a power cut and they automatically restart) there is a nasty spike as the load on the UPS is pushed to just shy of 100%.
    • by KC1P ( 907742 ) on Wednesday October 10, 2007 @02:02PM (#20929585) Homepage
      You're absolutely right. It seems like every OS (including Linux) goes through this -- in the early days it boots much faster than the competition, but once people start routinely layering all kinds of junk on it then it starts taking minutes to boot even on super-fast hardware.

      What really bugs me is how much of the startup config is done serially. A lot of startup tasks take time, and step N+1 has to wait until step N is finished whether or not it depends on that step. It seems to me that it would be worth the trouble to mechanize startup so that each step is isolated from all the others and knows which previous step it's dependent on and waits for only that step, while everything else cruises ahead in parallel. It'd be a big change from the way things are done now but it'd be worth it. Having my system stop dead for 60 seconds on every boot just because one of the NICs is unplugged (so DHCP isn't answering) is really annoying. Same deal with Apache choking on virtual domains ... one at a time ... if the name server isn't answering. All those "wait X seconds for Y to happen" things can really add up.
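The dependency-aware parallel startup described above can be sketched in plain shell; the step names and timings here are invented for illustration:

```shell
#!/bin/sh
# Two independent init steps run in parallel; a third step waits
# only on the step it actually depends on.
start_disk() { sleep 1; echo "disk ready"; }
start_net()  { sleep 1; echo "net ready"; }
start_web()  { echo "web up (waited only for net)"; }

start_disk & disk_pid=$!   # independent: run in background
start_net  & net_pid=$!    # independent: run in background

wait "$net_pid"            # the web step depends on net, not disk...
start_web
wait "$disk_pid"           # ...while disk init overlapped with it
echo "boot done"
```

With both slow steps overlapped, the whole sequence takes about as long as the slowest step rather than the sum of all of them - which is exactly the win over strictly serial init scripts.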

      Also, Linux isn't the entire universe, and some of us really do use those legacy BIOS features. Backwards compatibility is the *only* reason the PC architecture has survived, so deciding to toss that to the wind now is just stupid. The cost is minimal (it's not like the code is going to change once it's written) and if whipping up a few tables and setting a couple of INT vectors is honestly adding dozens of seconds to the boot time, well that's just programmer incompetence, it's not the architecture's fault. The rest of the older BIOS code doesn't do anything if you don't call it, so this just sounds like an excuse to be lazy.
      • by hjf ( 703092 )
        What I don't like about Linux is the amount of unnecessary services installed by default. For example, I have an old computer (P133/16MB RAM), and Debian 3.1 on it. Debian demands that I run Sendmail (or Exim, or Postfix). Why can't I live without an MTA? I doubt that regular home users actually send their e-mail through their local MTA; they probably use their ISP's SMTP, which, I think, is the proper way to do it.
        • What I don't like about Linux is the amount of unnecessary services installed by default. For example, I have an old computer (P133/16MB RAM), and Debian 3.1 on it. Debian demands that I run Sendmail (or Exim, or Postfix). Why can't I live without a MTA?

          Because the standard way to send mail on a Unix box is to send it through a 'sendmail' program, which is provided by Sendmail (or Exim, or Postfix).

          I doubt that regular home users actually send their e-mail through their local MTA, they probably use their ISP

        • In addition to what your other responder said, daemons are sometimes configured to notify the admin user of critical errors by sending mail, in case the admin isn't a vigilant log-checker. This is why Debian asks what user should be mail-aliased to root. In this case, that user would probably be you, and you might want to know when/why your cron'd apt update failed.
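For reference, the Debian question mentioned above boils down to one line in /etc/aliases; the login name below is a placeholder:

```shell
# /etc/aliases -- where mail addressed to root actually goes:
#   root: yourlogin
# After editing, rebuild the alias database:
#   newaliases
```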
      • by Karellen ( 104380 ) on Wednesday October 10, 2007 @02:48PM (#20930279) Homepage

        It seems to me that it would be worth the trouble to mechanize startup so that each step is isolated from all the others and knows which previous step it's dependent on and waits for only that step, while everything else cruises ahead in parallel.

        We're working on it... [ubuntu.com]
      • by Cycon ( 11899 )
        Having my system stop dead for 60 seconds on every boot just because one of the NICs is unplugged (so DHCP isn't answering) is really annoying.

        hell while you're at it, why not start paying attention to whether or not an ethernet cable is even plugged in in the first place?

        windows has been able to re-start DHCP automatically if you unplug and plug back in a cable for years and years now, why can't linux?

        easily my biggest pet peeve.

        • Re: (Score:2, Informative)

          by Seq ( 653613 )
          Any distribution that ships with NetworkManager (such as recent Ubuntu versions; I'm assuming SUSE as well) does this. For wireless, it may only work for interactive graphical sessions, but it should work without an interactive session for a wired connection (on my desktop I have a network connection without logging in).

          Also, ifplugd is extremely simple to install and requires extremely minimal configuration. I used this in college for a number of years without issue before all this fancy new stuff.
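On Debian systems of that era, ifplugd's setup amounted to a short defaults file; the interface name below is an example and the option string is what I recall as the package's usual default, so check your own system's documentation:

```shell
# /etc/default/ifplugd -- watch eth0 for cable plug/unplug events,
# bringing the interface (and its DHCP lease) up and down automatically:
#   INTERFACES="eth0"
#   ARGS="-q -f -u0 -d10 -w -I"
```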
      • by npsimons ( 32752 ) *

        It seems to me that it would be worth the trouble to mechanize startup so that each step is isolated from all the others and knows which previous step it's dependent on and waits for only that step, while everything else cruises ahead in parallel.

        There is work being done on this already. I can't remember specific links right now (googling turns up some interesting [linux.com] links [csiro.au]), but I remember I first heard about it on Planet Debian [debian.org], an RSS feed collector for Debian developer's blogs; I've found some very inter

    • by kisielk ( 467327 )
      You apparently haven't used any machines with a good number of devices being initialized by the BIOS. In particular, machines with multiple network interfaces that support PXE booting, RAID and SATA controllers, and some other random devices can take over a minute just to get past the initial BIOS initialization. I manage several servers which take just as long to get to the bootloader as they do to actually boot the OS. This gets tedious quickly if you're doing some work on the OS (think kernel updates and test
      • by cdrguru ( 88047 )
        Sadly, I think you are misinformed. Greatly. None of the code that you are waiting around for is in the BIOS. It is all in BIOS extensions that are called blindly by the BIOS because of address slot assignments dating from the IBM PC.

        So, if you want your RAID to work, its BIOS extension needs to be called. The fact that you replace the BIOS with something else isn't going to change that requirement one little bit. Now you might be able to work around this if you didn't boot from the RAID and did not acc
    • how many of us really have the wherewithal to make an architecture jump from x86?

      Um, anyone running Debian [debian.org]. I recently changed (painlessly) from x86 to x86-64 (AMD64), but I'd be just as happy if the hardware were cheap and easily available to go to Sparc, Alpha, PPC or ARM.
    • Some people say changing the BIOS around is like rearranging the deck chairs on the Titanic. That's not true; the BIOS isn't sinking. In fact, the BIOS is soaring; if anything, it's like rearranging the deck chairs on the Hindenburg.
    • The majority of boot time is spent initializing drivers and bringing the system to a usable state. The 3 seconds it takes for the BIOS to init the disk, locate the MBR, load the bootloader, and jump to it is negligible compared to the tedious hardware scanning and initialization done by the OS itself when it is finally loaded by the bootloader.

      Hmmm... You must not have heard of ACPI. Every PC I've used that supports it takes more than 5 seconds to start loading grub, and some take a lot longer because of the timeouts for entering the BIOS configuration utility.

      If you want to speed up the boot sequence, take a look at cutting the number of attached devices down to the bare minimum. Don't start any services during init. Do as little as possible to get the system to its usable state and you'll have minimized the boot time. Unfortunately, technology just doesn't work that way. System requirements (of both a hardware and a software nature) will require that you perform extra initialization at boot time, so any possible gains are already offset by the increased load.

      We already have bootchart. It works, and can be used to tune a system so that it spends less time loading services than the BIOS spends on its hardware probing.

      The neat thing about stuff like LinuxBIOS and flash-based booting is that your system can theoretically send out a DHCP request be

  • Why not EFI? (Score:3, Insightful)

    by Anonymous Coward on Wednesday October 10, 2007 @01:33PM (#20929175)
    Why not use EFI [wikipedia.org]?

    It does what you want and has been in desktop computers (Macs) for over a year now.

    • by seebs ( 15766 )
      When I originally wrote the article (October 2005), I wasn't aware of EFI. (That article got delayed MUCH longer than usual in the editing process, for reasons beyond anyone's control.)
    • Re: (Score:3, Insightful)

      EFI is a specification, not an implementation, where the core pieces are still controlled (And _never_ opened up) by vendors and is usually still a big wad of real mode assembly that nobody wants to touch. There is no 100% open-source EFI-compliant BIOS implementation. The specification alone for EFI is over 1,000 pages.

      To top it all off, to even begin development on stuff like Tianocore you need to agree to draconian licensing terms such as: "You acknowledge and agree that You will not, directly or indirec
    • I've dealt with EFI on Itanium systems. Fast isn't the adjective I would associate with EFI.

      In fact, until the damned OS boots, fast doesn't really go with Itanium. After the boot, yeah, sure.
    • Re:Why not EFI? (Score:4, Insightful)

      by segedunum ( 883035 ) on Wednesday October 10, 2007 @02:43PM (#20930223)
      Because EFI is very much proprietary, and the subject of this article is Linux and Open BIOS.

      EFI is also pretty broken. It tries to look better than BIOS, but really it isn't. Think of ACPI (Intel brain damage, as Linus Torvalds calls it) which looked good and looked like we'd get some standard interfaces.........and we didn't because hardware was too complex, it had quirks and everybody ended up doing variations on a different theme. EFI is the same, because of course, everybody's intellectual property has to be protected. I mean, we can't just have manufacturers downloading, installing and contributing to a standard Linux or OpenBIOS, because that would be too easy, it would make things work far too well and everyone would have wonderful boot times ;-). Maybe a motherboard manufacturer will bite the bullet and implement Linux or OpenBIOS when they realise how much better it will make their hardware, and how much cheaper it is without umpteen updates.

      EFI is also an awful lot more complex than BIOS, which adds to the list of things to go wrong in terms of different implementations. At least the BIOS we have today is a boot loader - and it doesn't really pretend to be anything else (hell, you'd be crazy to try anything else with it!). Now think about how many BIOS updates we have for various boards today to fix lots of broken things, and then extrapolate that out........... It's not a pretty picture.
      • Additionally, I would add that because EFI is supposedly defining interfaces (if only it were that simple), Intel has crazy ideas of implementing drivers and shit - in EFI and the hardware! Just think of the bloody hassle we have today with drivers and hardware. Intel is crazy when it comes to these things.
    • Re: (Score:3, Informative)

      Why not use EFI?
      Because it and UEFI - and its cousin from Phoenix, which now seems to be defunct (can't find a reference to it) - are part of the Trusted Computing [wikipedia.org]/Palladium nightmare. If you want TPM to lock you out of your computer or tell you how to use your computer, then so be it.

      I choose freedom.
  • by schnikies79 ( 788746 ) on Wednesday October 10, 2007 @01:36PM (#20929225)
    As the subject states, I wouldn't touch this, unless it was an official release from my board manufacturer. With a bad install or software bug, I can just re-install, but a bad bios can hose the motherboard. I might try it if someone had it running on the exact same hardware, down to part #'s for the ram.

    I'm admittedly not terribly bleeding-edge when it comes to hardware or electronics, but mucking with my BIOS is a no-no.
    • by miscz ( 888242 )
      I wouldn't do that if my motherboard wasn't fully supported, but many modern motherboards have a backup BIOS that can be loaded with the proper jumper setting.
    • Re: (Score:3, Informative)

      >>I might try it if someone had it running on the exact same hardware, down to part #'s for the ram.

      Fortunately, you don't need exact matching hardware to recover from a botched BIOS update if you have a socketed BIOS chip. The flash memory your BIOS is stored on can be easily removed, placed in someone else's computer with a compatible socket (It can be a whole different architecture, even), and reprogrammed with the vendor's BIOS using Linux+Windows compatible utilities such as Flashrom ( http://lin [linuxbios.org]
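For the curious, the backup/reflash cycle the parent describes looks roughly like this with flashrom. This is only a sketch: it assumes flashrom's "internal" programmer supports your board's chipset, that the flag names match your flashrom version, and that you're running as root.

```shell
# Save the currently working image before touching anything
flashrom -p internal -r backup.bin

# After hot-swapping the dead chip into the running board,
# write the vendor's BIOS image to it
flashrom -p internal -w vendor_bios.bin

# Verify the chip contents against the file on disk
flashrom -p internal -v vendor_bios.bin
```

These commands need real flash hardware behind them, so treat them as a procedure outline rather than something to paste blindly.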
      • I have a desktop that I built, but I primarily use a notebook. I've yet to see a modern notebook with a socketed BIOS or even jumpered.
    • Re: (Score:3, Informative)

      by evilviper ( 135110 )

      With a bad install or software bug, I can just re-install, but a bad bios can hose the motherboard.

      No it can't.

      First, it's been several years since I saw a motherboard with a soldered Flash chip, and even then, it was only the dirt cheap OEM boards from the likes of Dell/HP/etc, while the retail boards used a socket.

      It's actually pretty easy to buy a replacement Flash chip, or salvage one from a dead system, and do a little hot-swap trick to Flash it with your current BIOS image for back-up/recovery purpose

  • Disk-on-Chip Linux (Score:5, Interesting)

    by RancidPickle ( 160946 ) on Wednesday October 10, 2007 @01:40PM (#20929291) Homepage
    If they could come up with a dedicated Linux Bios combined with a Disk-on-Chip setup, it would make an impressive little computer. Fast-on, perhaps with a drive or removable flash drive, and all updatable. It certainly could make an inexpensive box, and could be an ideal homework machine for the kids or a combo stand-alone box / terminal for offices. If the network went down, people could still work.
    • And tying that together with Google(TM) Applications (and I do mean all of them: Search, Gmail, Apps, etc) means you would be able to run a thin client for a majority of day to day tasks.
    • Re: (Score:3, Informative)

      If they could come up with a dedicated Linux Bios combined with a Disk-on-Chip setup, it would make an impressive little computer.


      Yeah, but who would do such a thing? http://www.phoronix.com/scan.php?page=article&item=870&num=1 [phoronix.com]
    • Not needed. (Score:3, Interesting)

      by WindBourne ( 631190 )
      My home server boots from a Sandisk 4G CF drive. Speedwise, it is blazing. The mounts are a bit different: / is on the CF drive, while /home, /opt, and parts of /var (such as /var/log) are on HDD. Roughly, any directory that varies is put on HDD. Next year, I will buy another CF, only it will be 8G. By then, the price will be much lower, and the speeds increased.
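That split looks roughly like this as an /etc/fstab (a sketch only; the device names and partition layout here are made up for illustration):

```
# read-mostly root on the CF card; churny directories on the spinning disk
/dev/hda1   /          ext2  noatime   0  1
/dev/sdb1   /home      ext3  defaults  0  2
/dev/sdb2   /opt       ext3  defaults  0  2
/dev/sdb3   /var/log   ext3  defaults  0  2
```

The noatime option on the CF root avoids a write to the card on every file read, which helps both speed and flash wear.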
  • by vil3nr0b ( 930195 ) on Wednesday October 10, 2007 @01:41PM (#20929305)
    I have repaired clusters for the last two years and most have OpenBIOS. Here's why: 1) Fast as hell!! 2) Easy to change options. 3) Can mount the file to a disk, edit, and then replace. 4) Errors can be determined by watching the console; no video needed. One serial cable, one laptop = priceless. 5) Free
  • From TFA:

    Of course, the exact meanings of these codes vary from one BIOS to another, and only some vendors document them. Luckily, the open source vendors are very good about documenting them.

    Wow!! - Who are these open source people who are "very good about documenting" beep codes?? Any chance of deploying them to all the other open source projects out there?? - they could sure do with the help!
  • by asphaltjesus ( 978804 ) on Wednesday October 10, 2007 @02:04PM (#20929603)
    Why? Well, Trusted Platform Computing needs to start on the BIOS level in order to maintain a trusted environment. If motherboard manufacturers actually move to an always-on TPM, then OSS developers may be locked out of newer hardware.

    The mobo manufacturers will love the price versus a commercial TPM, thereby limiting TPM deployment.

    That's why getting involved with these projects in particular is essential to everyone who understands the importance of computing Freedom and overall innovation.
  • by seebs ( 15766 ) on Wednesday October 10, 2007 @02:04PM (#20929607) Homepage
    No, I don't know that much about what's happened in the field in the year and a month or so since this article went up, a month or so after I wrote it. I've been busy.
  • Flash Hibernate (Score:3, Interesting)

    by Doc Ruby ( 173196 ) on Wednesday October 10, 2007 @02:17PM (#20929841) Homepage Journal
    I want my PC to hibernate to flash, storing an image that requires only the slightest update to reflect network state, time, and a few other counters. And I want all apps to store their state so they can be "rebooted" to flush memory leaks, but return to their high-level state.

    That would give instant-on that's great for mobiles, but also good for desktops. Why is that so hard? Isn't hibernating to flash with a little update a lot easier than rewriting the BIOS?
  • by Skapare ( 16644 ) on Wednesday October 10, 2007 @02:40PM (#20930169) Homepage

    One major reason a PC is so slow to boot is the totally free-wheeling nature of attached devices. There's actually too much liberty to do bad things in device hardware. In some cases, probes to see if a certain specific device is present can cause some other device to go into a locked up state. PCs also have the complication that interrupts don't really identify the device in the same terms as how you access the device. This means we have to do things like timed waits in device probes. Ideally we should be able to discover all the devices in a computer within a millisecond for as many as 100 devices.

    We need a whole new system level (as opposed to CPU level) architecture. We need a uniform device address range for all devices, and a uniform set of basic commands for all devices. Then all devices in the same class (storage devices are one class, network interfaces are another, etc.) would have a common set of commands to operate the normally expected functions of that device class.

    And we really don't need a BIOS, or at least not much of one. A simple switch that lets us select between 2 flash areas to load at reset or power on would handle almost all cases. And even that's not necessary if we choose to run a stripped down boot selector program from flash that lets us select other flash areas to load. That combined with a hardware based "JTAG over USB" protocol to store new flash images when no present ones work (maybe when an on-mainboard or rear-access switch enables it) would provide any needed recovery capability.

    And why can't we have gigabytes of flash? I bought a 2GB SD card the other day for $20. Can't they put that on the mainboard? An SD slot would not only provide for a lot of capacity (way more than what you get on a CDROM), but also a means to stop writing, and a means to swap out bad flash or reload it in another computer.

    I have been working on a description document for a new architecture. It's not ready, yet, or I would post it here. But I'll try to speed it up.

    • My main bone of contention with the bootup checks is that they test for something new where 99% of the time nothing 'new' exists. Once a box is stable, all that will go in and out is USB devices and the odd CD or DVD, so it would immensely speed things up if we could register the device status somewhere and thus get rid of all this useless probing.

      We're running machines that are clocked in the GHz, yet bootup is still no faster than an ancient 80386 at 25MHz - despite Linux BIOS demonstrations that wer
    • by Todd Knarr ( 15451 ) on Wednesday October 10, 2007 @04:00PM (#20931407) Homepage

      That depends on the hardware. If you have to deal with legacy ISA devices, yes. Anything in the last 5 years or so doesn't have an ISA bus. The PCI bus has a defined way for devices to identify themselves and what I/O addresses and interrupts they need. USB similarly has a defined way to determine what's on the bus. Since the BIOS itself controls things like on-motherboard serial ports, it already knows which ones it's turning on and where they go. So basic initialization should be relatively quick and easy.
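That PCI self-identification is just fixed, little-endian fields at the front of each function's 256-byte configuration space. A quick sketch of decoding them (field offsets per the PCI spec; the sample bytes below are synthetic, not read from real hardware):

```python
import struct

def parse_pci_header(cfg: bytes) -> dict:
    """Decode the ID fields at the start of a PCI configuration header.
    All fields are little-endian; offsets are fixed by the PCI spec."""
    vendor_id, device_id = struct.unpack_from("<HH", cfg, 0)  # offsets 0 and 2
    revision = cfg[8]
    # Class code is 3 bytes at offset 9: prog-if, subclass, base class
    prog_if, subclass, base_class = cfg[9], cfg[10], cfg[11]
    return {
        "vendor": f"{vendor_id:04x}",
        "device": f"{device_id:04x}",
        "class": f"{base_class:02x}{subclass:02x}{prog_if:02x}",
        "revision": revision,
    }

# Synthetic example: vendor 0x8086 (Intel), device 0x100e,
# class 0x020000 (Ethernet controller), revision 0
cfg = bytearray(64)
struct.pack_into("<HH", cfg, 0, 0x8086, 0x100E)
cfg[9:12] = bytes([0x00, 0x00, 0x02])  # prog-if, subclass, base class
print(parse_pci_header(bytes(cfg)))
```

On Linux the same bytes are exposed per device in /sys/bus/pci/devices/*/config, which is why the OS (or a minimal BIOS) can enumerate the bus without blind probing.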

      Frankly the only things the BIOS should need to do with modern OSes is to reset the hardware and provide the basic I/O interface to the disks, screen and keyboard that any boot loader's going to need (so the boot loader doesn't need drivers for video, USB vs. keyboard-port keyboards, etc.).

      Alternatively, the BIOS should initialize all hardware, assign all interrupts etc., and the OS should simply take what the BIOS gave it. But IMO having the BIOS do only the minimum required and leaving the bulk of the work up to the OS gives more flexibility and resilience in the face of hardware changes or failures.

    • Re: (Score:3, Interesting)

      by evilviper ( 135110 )

      An SD slot would not only provide for a lot of capacity (way more than what you get on a CDROM), but also a means to stop writing, and a means to swap out bad flash or reload it in another computer.

      The cost of the hardware needed to support an SD card slot (fully, in hardware-only, before POST) would be more than the cost of a lot of ($40) low-end motherboards.

      Directly wiring a Flash chip to the memory space is MUCH cheaper, and what's more, the most basic socket for the Flash/CMOS has all the same advantag

  • by gstoddart ( 321705 ) on Wednesday October 10, 2007 @02:53PM (#20930357) Homepage
    Speed boot: (noun) What we water ski behind in Canada.

    Thanks, I'm here all week. Try the veal. :-P

He has not acquired a fortune; the fortune has acquired him. -- Bion
