Booting Linux Faster

krony writes "IBM's DeveloperWorks explains how to decrease boot times for your Linux box. The concept is to load system services in parallel when possible. Most surprising to me is the use of 'make' to handle dependencies between services." The example system shown is able to cut its boot time in half, but the article stresses that the effectiveness can vary wildly from machine to machine.
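
For readers who just want the flavor of the technique: roughly speaking, each init script becomes a phony make target, its ordering constraints become prerequisites, and make -j starts whatever is independent in parallel. A minimal sketch, with made-up service names and paths rather than the article's actual ones:

#!/bin/sh
# Sketch only: each service is a phony make target, ordering constraints
# are prerequisites, and make -j runs independent targets in parallel.
cat > /tmp/rc.mk <<'EOF'
.PHONY: all network syslog sshd httpd
all: syslog sshd httpd
network: ; /etc/init.d/network start
syslog:  ; /etc/init.d/syslog start
sshd: network ; /etc/init.d/sshd start
httpd: network ; /etc/init.d/httpd start
EOF
make -j4 -f /tmp/rc.mk all

With -j4, syslog comes up alongside network, and sshd and httpd follow as soon as the network script returns.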
  • Hmm (Score:2, Interesting)

    by Arker ( 91948 ) on Thursday September 18, 2003 @06:53PM (#6998869) Homepage

    I guess someone has a use for this, or they wouldn't have spent the time working on it. But I don't see it.

    I never noticed Linux taking very long to load, and even if it did I doubt I would care very much, as reboots are so rare anyway.

  • Dual (or more) cpus (Score:2, Interesting)

    by Alizarin Erythrosin ( 457981 ) on Thursday September 18, 2003 @06:54PM (#6998878)
    This sounds like an awesome reason to have multiple cpus. Maybe I missed it in the article (I did read it rather quickly), but it didn't look like it was mentioned.

    Even on an HT-enabled P4 this would be cool. Although I/O would be the limiting factor in process startup speed, letting multiple processes start up at once would allow the CPU to switch to others while I/O is being serviced, much like make -j(# of CPUs+1).
  • by Concerned Onlooker ( 473481 ) on Thursday September 18, 2003 @06:58PM (#6998918) Homepage Journal
    It's just human nature. Douglas Adams wrote in "Last Chance to See" that he would gladly spend an hour working on a way to save himself ten minutes on the computer.
  • Isn't there a way (Score:3, Interesting)

    by SHEENmaster ( 581283 ) <travis@uUUUtk.edu minus threevowels> on Thursday September 18, 2003 @06:58PM (#6998920) Homepage Journal
    to reboot without rebooting, such that uptime remains the same but kernel upgrades can take place?

    I remember reading about it somewhere, but it was skimpy on details, saying only that it was a "bad idea".
  • Re:Hmm (Score:5, Interesting)

    by Obasan ( 28761 ) on Thursday September 18, 2003 @07:13PM (#6999055)
    As someone who uses Linux on a laptop, running SuSE 8.2, I *DEFINITELY* have a use for this. I use my laptop in a professional capacity to do quite a lot of things, and while I can run on batteries I do generally turn it off and on at least a couple of times a day. Further, because I am occasionally forced to dual boot, sometimes it can be even more often. It is a good 3-4 minutes between power-on and KDE desktop. This is on an 800MHz P3 with 512 megs of RAM.

    Do I want a faster boot?

    You bet your ass I do.
  • by Sleepy ( 4551 ) on Thursday September 18, 2003 @07:17PM (#6999085) Homepage
    This was three years or more ago, but I remember one of the PPC Linux developers "converted" all his system boot scripts in init.d to compiled C.

    Boot times went from about 2 minutes, to 35 seconds.

    (It took "so long" because it was an old PPC 601 60MHz or something like that).

    Distributions such as Mandrake and Gentoo claim they go the extra mile for "performance". I've wondered why neither has cleaned up their boot process.

    You wouldn't think Bash is slow from interactive use, but it really is. Piggyback onto that speed problem the fact that too many "functions" (OK, *commands*) are standalone executables... create sub-process, collect result, destroy, rinse, repeat.

    This is pretty interesting stuff, and I applaud this guy's efforts. Init script architecture is pretty thankless stuff... no "glory". Fixing this would be like someone fixing fdisk... no one wants to touch the damn stuff...

  • by oever ( 233119 ) on Thursday September 18, 2003 @07:22PM (#6999123) Homepage
    These are all good suggestions. There are two reasons one would like to reduce boot times:
    • You're running an important server
    • You're booting your computer daily because you only really use it 1/3 of the time and don't want to waste electricity.

    If the second reason is your main reason to boot quickly, you'll probably want to start X too every time you boot. So waiting to start X until a user says X should be started is not an option. Your other suggestions are spot on.

    If you'd like to take away the last big bottleneck, it would be a good idea to start X in parallel with the other, independent, services. This is exactly what's described in the insightful IBM article. Hooray!

  • by MotownAvi ( 204916 ) <`moc.namssird' `ta' `iva'> on Thursday September 18, 2003 @07:24PM (#6999145) Homepage
    Bah. Mac OS X's done this since Jaguar.

    The big question is "how do you specify dependencies?" The article uses makefiles. In Mac OS X, each startup item has a properties file (associative array) that names the item and specifies all the items that it depends on (http://www.usenix.org/events/bsdcon02/full_papers/sanchez/sanchez_html/). Then SystemStarter makes a dependency graph and starts them up in parallel whenever possible (http://developer.apple.com/documentation/MacOSX/Conceptual/SystemOverview/BootingLogin/chapter_4_section_2.html).
  • Re:Make? (Score:5, Interesting)

    by clem.dickey ( 102292 ) on Thursday September 18, 2003 @07:27PM (#6999167)
    Not having looked at the code, it seems to me that make would only handle half the problem: booting. Shutdown is the other half; the dependencies would be reversed. For example:

    For boot you would tell make this:

    sshd: network
    rpcd: network

    But for shutdown you need to tell it this:

    network: sshd rpcd

    Ideally one set of input data should take care of both cases.
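
    For what it's worth, one way to see that a single dependency list can drive both directions, using coreutils' tsort rather than make, purely for illustration:

    #!/bin/sh
    # One list of "A must be up before B" pairs; tsort gives the boot order,
    # and reversing that gives the shutdown order.
    printf 'network sshd\nnetwork rpcd\n' > /tmp/deps
    tsort /tmp/deps        # boot order, one service per line
    tsort /tmp/deps | tac  # same order reversed, for shutdown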
  • Re:Make? (Score:3, Interesting)

    by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Thursday September 18, 2003 @07:34PM (#6999202) Homepage
    I thought it was surprising at first. Yes, that is what make is designed to do, but I'd think most people (myself included) think of make as a programming tool. I don't think I would have thought about using make for that job, at least not at first.

    This isn't make's intended use (it was designed for programming), so it's a bit surprising to see it used this way at first.

    That said, it does make perfect sense.

  • Re:Make? (Score:5, Interesting)

    by slamb ( 119285 ) on Thursday September 18, 2003 @07:44PM (#6999272) Homepage
    > > Most surprising to me is the use of 'make' to handle dependencies between services."

    > Really? That's an odd statement. How surprising that they choose to use an open-source software application that is designed to compactly represent dependencies for representing dependencies.

    Actually, I also found it surprising, and I think I know "make" pretty well. The thing about make is that in 95% of cases almost all of the rules correspond to an actual target file that should be generated or not based on presence and timestamp. There are exceptions, like the usual "all" rule that's called a phony rule since it generates no file. (And make sure you have a ".PHONY: all" line right before it or "touch all" will break your build.) It's usually just there for the dependencies on a bunch of real targets, so you don't have to type "make this && make that && make ...".

    Parts of make that they're not using here:

    • logic for checking if a real target is up-to-date
    • rules for creating specific targets from generic ones, like the .c.o target
    • variable substitutions
    • a lot of other things...look at the man/info pages; modern versions of make have a lot of functionality that makes no sense here
    And they are using:
    • topological sort (easy algorithm!)
    • stuff for following the partial order in parallel (also surprisingly easy)
    • the parser, but it's for a widely-disliked syntax that doesn't make a lot of sense here

    When I say the syntax doesn't make sense here, I mean (in addition to the usual make complaints) that it's all in one file. Distributors (Red Hat in particular) have been very serious about separating stuff out into .d directories so that packages don't need to touch each others' files.

    So, I think make is the wrong tool for the job here, at least in the long term. A simple tool with separate files for each service would be a win. I don't think the author of the article really cares about that (it's just a little tip for intermediate users), but if a distribution wanted to implement this idea and maintain it, they wouldn't use make.
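
    To make that concrete, here's roughly what a "simple tool with separate files for each service" could look like; /etc/depend.d is a made-up path, not something any distribution actually ships:

    #!/bin/sh
    # Each service owns one file, /etc/depend.d/<service>, listing the services
    # it needs (one per line).  Stitch them into pairs and let tsort order them.
    for svc in /etc/depend.d/*; do
        name=$(basename "$svc")
        echo "$name $name"         # identical pair: the node exists even with no deps
        while read -r dep; do
            echo "$dep $name"      # "dep has to be running before name"
        done < "$svc"
    done | tsort                   # full start order, no single shared makefile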

  • by Michael Woodhams ( 112247 ) on Thursday September 18, 2003 @07:59PM (#6999377) Journal
    Something for the usability folks to think about:

    Ordinary users, and even many geeks, don't have time to figure out what every service does and whether they use it. A policy of aggressively turning off services (mostly for security, partly for boot time) carries a risk of turning off a service that is needed.

    I suggest that there should be a standard framework for dealing with "a needed service is not running" problems. On a desktop Linux, this should pop up a window explaining what service wasn't running, and giving options to do nothing, start the service on a one-time basis, or add the service to boot time start-up (and prompting for root password as required.)

    (There can be extra options - don't start the service, and never ask me again. Don't start the service, and never ask me again if this particular program complains about it.)
  • by Master Bait ( 115103 ) on Thursday September 18, 2003 @08:11PM (#6999506) Homepage Journal
    That's a pretty good idea.
    I use a bunch of homemade Xterminals made out of nForce boards, and we have replaced /sbin/init itself with an executable shell script (and use ash for the shell instead of bash). The entire contents of init is this:
    #!/bin/sh
    /bin/cat /dev/null > /var/run/utmp
    /sbin/insmod /modules/nvnet.o
    /sbin/ifconfig lo 127.0.0.1
    /sbin/mount -o remount,rw /
    /sbin/mount -t proc /proc /proc
    /sbin/insmod /modules/nvidia.o
    /usr/X11R6/bin/X -broadcast
    /bin/sh

    No shutdown script is necessary because Xterminal users simply logout and turn them off.

    I think one of the biggest slowdowns on PCs is the lame PC BIOS, which takes a very long time to run through all the hardware. I remember following LinuxBIOS [linuxbios.org] development. It is so fast that it finished checking the computer's hardware before the disk drives finished spinning up.

  • by AxelTorvalds ( 544851 ) on Thursday September 18, 2003 @08:26PM (#6999646)
    It was a Pentium 300MHz-class machine in an embedded device. I had the kernel in flash and used my own variation of the LinuxBIOS project.

    My best times were power on to init in about 2.7 seconds. By the time we got the "authentication code" and what not in it was closer to 30 seconds.

    Take all that BIOS stuff out, create a truly lean and mean setup with minimal init scripts, and you can blaze. The longest step was copying the kernel from slow-mo flash memory into RAM...

  • by int2str ( 619733 ) * on Thursday September 18, 2003 @08:27PM (#6999650)
    What's wrong with Suspend/Resume? Powering off your notebook seems like a waste of battery and time if you ask me.

    I would even start to apply this to desktop machines - just suspend it, don't turn it off all the way.

    Cheers.
    Andre
  • by mangu ( 126918 ) on Thursday September 18, 2003 @08:29PM (#6999669)
    The problem was "There is a sense of snideness in the Linux community, and trying to ask for support is one of those examples of snideness."


    I think this "problem" you mention is some sort of urban legend. I have heard this same argument countless times, but I have never actually seen this happen. I have been a subscriber to a few Linux mailing lists for several years now, and I have never actually seen someone post "RTFM" as an answer to a question.

    Myself, I try to sort of evaluate the person who is asking for help. If I think he has an adventurous soul and is willing to go through a lot of documentation, I try to point him to the relevant how-tos. On the other hand, if I feel the person is somewhat impatient, I recommend a Mandrake installation, since it's the most likely to get the user safely past the most annoying problems with minimum fuss.

  • by Karrots ( 14012 ) on Thursday September 18, 2003 @08:39PM (#6999761)
    I know the Gentoo startup scripts can do this. Well, I take that back; I don't know if they actually do it, but they have a setting for it. As you can see, mine is off at the moment.
    # Set to "yes" if you want the rc system to try and start services
    # in parallel for slight speed improvement.

    RC_PARALLEL_STARTUP="no"
  • by dmaxwell ( 43234 ) on Thursday September 18, 2003 @08:54PM (#6999878)
    Kernel vulnerabilities are fairly rare. A new kernel is the only thing that mandates a complete system restart when upgraded. Linux and the BSDs got this right; it's the one overall thing I don't like about OS X. At most, OS X should only have to restart the GUI and close apps for most system patches....but nnnnooooo. Sheesh guys, you trumpet "the Power Of Unix"; use some of it.
  • New boot system (Score:3, Interesting)

    by Hard_Code ( 49548 ) on Thursday September 18, 2003 @09:04PM (#6999951)
    The traditional SysV init is archaic, crude, and disgusting. What, six hardcoded numeric runlevels? Wow, how useful is that. And I love ordering my startup scripts with two-digit integers.

    *nix needs a major boot/shutdown system upgrade. I have migrated to minit [www.fefe.de], but that is primarily for low memory usage. It allows a rudimentary mechanism for specifying dependencies, but is geared mostly toward being minimalistic. This is 2003; I think we can come up with something better than SysV init.

    Features of a next gen boot/shutdown service manager:

    * uses real dependency traversal on startup and shutdown (maybe using a small theorem prover like CML2, or maybe something like make)
    * allows configuration of arbitrary and unlimited sets of services, which can be named by arbitrary string literals - no longer chained to 7 numeric choices. e.g. "roaming laptop", "docked server", "minimal services", etc.
    * built-in service start/stop/restart/status/enable/disable tools, and standard service API with bindings for various languages (what, native services? imagine that...we do so for Windows NT+, e.g. apache) as well as Plain Old Shell Scripts. So every freakin' flavor/distro of *nix doesn't have its own fscking way to start/stop/enable/disable services.

    A lot of the garbage that goes on during startup (have you looked at the standard Red Hat scripts?), such as mounting drives and file systems and setting network and hardware parameters, could probably stand to be standardized as well, and pulled into drivers or services in a standardized fashion. Ideally all these APIs would be exposed both through command-line tools and through desktop-integrated GUI tools, so that modifications don't entail digging up some ad hoc script on disk, modifying it, and hoping you remember what the fuck you did a year ago in some system script.
  • Re:Hmm (Score:2, Interesting)

    by dfries ( 466073 ) on Thursday September 18, 2003 @09:12PM (#7000015) Homepage Journal
    For me it is in /etc/cron.daily/find
    rm it.

    If you can stand it, add "noatime" to /etc/fstab for your Linux partitions. Your mail client will always say 'new mail' instead of just 'mail', but if things are just reading from the drive, the drive can actually spin down instead of having to write just to record what it read.
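
    If you want to try noatime before committing to it, something like this works (the mount point is just an example):

    # Flip it on a live system first:
    mount -o remount,noatime /home
    # To make it permanent, add noatime to that filesystem's options field in
    # /etc/fstab, e.g.:  /dev/hda2  /home  ext3  defaults,noatime  1 2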

  • Re:Hmm (Score:1, Interesting)

    by Anonymous Coward on Thursday September 18, 2003 @09:28PM (#7000125)
    using a modern, untweaked SuSE you just have to accept this.
    have you tried removing unused services? apache-perl-php seems to be a default install option (8.3, earlier?), usually "hwscan" (on boot) is done every time (how often do you change hardware in a notebook; this is NOT pcmcia stuff); DMA enabled on disks?; optimized kernel?; have you tried suspend-to-disk|memory?
    how much time did you invest in learning (and thus optimizing) your system for your needs? linux (meaning the kernel and other drivers) is rather tolerant concerning hardware, which is THE OPPOSITE of speed optimization. given the skills and motivation you might try to google for patches for certain chips and apply them. beware that one might spend near-infinite time on this task (try an LFS install, or use Gentoo stage 1). this is (as always) a question of balance, for developers, distro makers and end users.

    occasionally forced to dual boot
    when i do boot into this 'other' os, i rather think of choosing to do such; seems nicer that way *g*

    concerning KDE: this is old news; yes, one might call it slow.
    gnome may be faster, or it may not. read, try, learn; there is no alternative (resistance is futile ;-)
    if you really want, try running a window manager without a desktop solution (afterstep?).

    if you think you have learned your linux beyond the world of yast(2), switch to another distro (as SuSEconfig is quite good at invalidating user-made changes). if you are bold, try a real os (*bsd).

    remember that you don't always get what you want, or at least not for the price you are willing to pay.
    have fun!
  • by Skapare ( 16644 ) on Thursday September 18, 2003 @11:47PM (#7000975) Homepage

    Back in 1999 I rewrote all the init scripts entirely from scratch. I did this after having spent a few years hacking at init scripts in BSD/OS, OpenBSD, Red Hat, Slackware, and Solaris. I had experienced all the crankiness of those systems (Red Hat and Solaris were the worst) and this time decided to avoid all that. I gave the scripts entirely different names so as not to conflict with existing scripts (this was Slackware at the time). That way I could switch between them with just a change of /etc/inittab. It took a few hours, but I had a running, fully functional system by the end of the day, and I have been running on those scripts, subsequently debugged and tweaked, ever since. They booted up noticeably faster than even the Slackware scripts (which were about as fast as the OpenBSD scripts).

    Ironically, I didn't do this to get the boot speed. The init scripts are fast enough now that the kernel initialization time is longer, anyway. I did this because I hated having a bunch of separate directories with symlinks in them for each run level. I didn't like having to use specialized tools to manipulate the system (I wanted to routinely use the tools I would have available if I were running from a rescue floppy trying to fix it). That meant doing things with a basic set of shell commands. Yet I didn't want to abandon having separate scripts for each service/daemon being started (or stopped, as the case may be). What I ended up doing was creating a single subdirectory for all the individual service scripts, and making the script name follow a pattern that included both the startup sequence (the stop sequence simply ran backwards) and the run levels. Here's what the names in /etc/sys on my system look like:

    • 000.12345.net_lo
    • 020.--345.video_120x58
    • 040.--345.keymap
    • 060.-2345.mouse
    • 100.-2345.net_eth0
    • 101.-2345.net_eth1
    • 190.-2345.gw
    • 200.--345.random
    • 220.-2345.syslog
    • 240.--345.at
    • 260.--345.cron
    • 300.-2345.dns
    • 320.-2345.nsd
    • 400.-2345.ssh_main
    • 410.-2345.ssh_alt
    • 520.--345.inet
    • 540.-2345.ntp
    • 600.--345.mail
    • 620.--345.http
    • 640.--345.rsync
    • 660.-----.pop3
    • 700.----5.xfs
    • 780.--345.ftp
    • 800.--345.cache
    • 950.----5.xdm

    Figuring out which run level each service starts in is left as an exercise for the reader. BTW, I think most of the speed comes from the fact that I didn't add a lot of fat to my script system. That's easier to do when you do your own design.
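
    For anyone curious how such a naming scheme might be walked, here's a rough guess at the shape of the driver script; this is a sketch of the idea, not the parent's actual code:

    #!/bin/sh
    # Names look like SEQ.RUNLEVELS.NAME, e.g. 220.-2345.syslog
    level="$1"      # run level being entered, e.g. 3
    action="$2"     # "start" or "stop"
    if [ "$action" = stop ]; then
        list='ls -r'    # stop order is simply the start order reversed
    else
        list='ls'       # the numeric SEQ prefix makes lexical order the start order
    fi
    for script in $($list /etc/sys); do
        case "$(echo "$script" | cut -d. -f2)" in
            *"$level"*) /etc/sys/"$script" "$action" ;;   # wanted at this level
        esac
    done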

  • by citog ( 206365 ) on Friday September 19, 2003 @12:39AM (#7001169)
    .. and it isn't the quickest to reboot either! Mind you, I don't find I'm rebooting that frequently compared to my Windows machine.
  • by Evilive ( 625934 ) <evilive@@@occultmail...com> on Friday September 19, 2003 @03:04AM (#7001672) Homepage Journal
    I did it and noticed a decrease in the boot time, probably 15-20 sec. give or take. YMMV.
  • by dwidznz ( 708889 ) on Friday September 19, 2003 @03:32AM (#7001751)
    I've seen the exact opposite approach taken - switching from parallel to serial to reduce startup times.

    The reason this worked was that starting processes in parallel increased disk contention and the extra seeks brought the machine to a crawl.

    Sure, that was on a system with very slow seek times, and not much in the way of disk caching and scheduling (an Amiga 500 with no HD). A lot of things have improved since then, but seeks are still extremely slow in machine terms, and we also have virtual memory and demand page loading which I imagine don't help the problem.

    Maybe keeping data needed at startup centralised on the disk (e.g. in the boot partition) would help.

    As disk accesses during startup are probably pretty predictable (consistent from one boot to the next), it may be possible to pre-load the disk cache to improve startup times.

    A simple approach would be to log disk blocks accessed during startup, and then read them (in a sensible order of course, and in parallel across disks) at the start of the next boot.
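
    A crude, file-level version of that last idea (true block-level logging would need kernel help, but warming the page cache from a list of files is easy; /etc/boot-files is a made-up name for such a list):

    #!/bin/sh
    # Read every file the previous boot was observed to use, in the background,
    # so later init scripts find it already in the page cache.
    while read -r f; do
        [ -f "$f" ] && cat "$f" > /dev/null &
    done < /etc/boot-files
    wait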
  • LinuxBIOS (Score:3, Interesting)

    by Taco Cowboy ( 5327 ) on Friday September 19, 2003 @04:15AM (#7001852) Journal


    I thought there was an outfit called "LinuxBIOS" that was supposed to make rebooting, especially cold booting, a very fast process.

    Can anyone here tell me about the recent progress of LinuxBIOS?

    Thank you!

  • by Anonymous Coward on Friday September 19, 2003 @10:34AM (#7003688)
    That is an incorrect assumption. It is possible to patch the kernel while it is running. Also, it's very, very freaking easy to load loadable kernel modules, especially when your kernel is Linux. Some people have module support disabled, though, and may have access to kernel memory disabled, so they wouldn't be vulnerable... unless they could use a bug in the kernel to gain write access to kernel memory.
    Anyway, my point is: don't assume.
