Ubuntu 9.04 Daily Build Boots In 21.4 Seconds

Pizzutz writes "Softpedia reports that Ubuntu 9.04 Boots in 21.4 Seconds using the current daily build and the newly supported EXT4 file system. From the article: 'There are only two days left until the third Alpha version of the upcoming Ubuntu 9.04 (Jaunty Jackalope) will be available (for testing), and... we couldn't resist the temptation to take the current daily build for a test drive, before our usual screenshot tour, and taste the "sweetness" of that evolutionary EXT4 Linux filesystem. Announced on Christmas Eve, the EXT4 filesystem is now declared stable and it is distributed with version 2.6.28 of the Linux kernel and later. However, the good news is that the EXT4 filesystem was implemented in the upcoming Ubuntu 9.04 Alpha 3 a couple of days ago and it will be available in the Ubuntu Installer, if you choose manual partitioning.' I guess it's finally time to reformat my /home partition..."

  • by ani23 ( 899493 ) on Wednesday January 14, 2009 @06:19PM (#26456609)
    boots 3.1 seconds faster with ext4 over ext3
  • reformat? (Score:3, Insightful)

    by doti ( 966971 ) on Wednesday January 14, 2009 @06:19PM (#26456625) Homepage

    Converting an ext2 file system to ext3 takes a single command that runs almost instantly. It basically just adds a flag that enables journaling.

    Will ext4 be so different that it will not be possible to convert without reformatting?
    That would be a pain for the half-terabyte partitions we have today.
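
    For reference, the ext2-to-ext3 step really is one command, and ext4 does offer a similar in-place path, with the caveat that existing files keep the old block-mapped layout and only newly written files get extents. A rough sketch (/dev/sdXN is a placeholder for an unmounted partition):

        # Add a journal to an ext2 filesystem, turning it into ext3:
        tune2fs -j /dev/sdXN

        # Enable the main ext4 features in place on an unmounted ext3
        # filesystem, then let e2fsck rebuild the affected metadata:
        tune2fs -O extents,uninit_bg,dir_index /dev/sdXN
        e2fsck -fD /dev/sdXN
        # ...then change the filesystem type in /etc/fstab from ext3 to ext4.

    So a full reformat is only needed if you want every existing file in the new on-disk format.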

  • by trouser ( 149900 ) on Wednesday January 14, 2009 @06:22PM (#26456675) Journal

    They've shaved 10 seconds off the boot time? In a typical working week that buys me 50 seconds more work time. I'll be so much more productive.

  • by Cyberllama ( 113628 ) on Wednesday January 14, 2009 @06:24PM (#26456711)

    So in order to be a "visionary", I merely have to decide what consumers might want (not that hard, being one myself), and then ask people smarter than me to make it happen, with no actual technical insight into how to make it happen myself?

  • Who cares? (Score:4, Insightful)

    by bigredradio ( 631970 ) on Wednesday January 14, 2009 @06:27PM (#26456773) Homepage Journal

    I don't mean to troll, but I could care less about boot up times. What I care about is uptime!

    With Windows, you are always having to reboot the system due to everything from software installs to changing a network connection.

    On Linux, I never have to reboot. Basically my desktop stays on unless I am taking a long weekend. I understand that efficiency is good, however, a fast boot-up does not seem like news to me.

  • by VinylRecords ( 1292374 ) on Wednesday January 14, 2009 @06:31PM (#26456833)

    What exactly is the definition of boot?

    When I start up my IBM ThinkPad (1.5 GHz single processor, 512 MB RAM, garbage video card) running Windows XP, it takes roughly 10-15 seconds to get to the user log-in interface from the moment the power button is pressed.

    But once you log in, you are looking at two to three minutes while background applications and processes open, Explorer loads, and the applications that launch at start-up come up.

    Does the time after you log in count as boot time? Getting to the sign-in screen takes only 10-15 seconds, but after logging in it takes well over two minutes before I can actually run anything at normal capacity.

  • by Tubal-Cain ( 1289912 ) * on Wednesday January 14, 2009 @06:31PM (#26456835) Journal
    If it boots in less than 1/3 to 1/6 as much time as ext3... Surely there will be an improvement in overall performance?
  • by Anonymous Coward on Wednesday January 14, 2009 @06:33PM (#26456881)
    You understand journaling only applies to writes? Sure, there are some log-file writes at startup, but not many. Oh, and ext4 should be better about keeping things defragmented, which could explain the difference.
  • Re:Who cares? (Score:3, Insightful)

    by bigstrat2003 ( 1058574 ) * on Wednesday January 14, 2009 @06:38PM (#26456953)

    With Windows, you are always having to reboot the system due to everything from software installs to changing a network connection.

    No, you aren't. This hasn't been true since Windows XP, at least. I can get uptimes of months at a time on my Windows box; the only time it comes down is for hardware changes or OS updates.

  • by RulerOf ( 975607 ) on Wednesday January 14, 2009 @06:39PM (#26456981)

    8 years of development and we're still ass-whipped by '90s technology. Way to go....

    I know you jest, but this almost amazes me too. Conversely, have you ever tried booting Win98, 95, or 3.1 on modern hardware?

    Even as someone who shuts his computer down when he leaves it for more than a couple of hours, I'd welcome faster-booting machines, but what I'd really prefer is faster logons. Bootup in less than 30 seconds, which I've even got with Vista, is fine with me.

    Logon, on the other hand, is finished whenever your machine gets around to doing all the things you told it to do that queued up while you were waiting for it to become responsive, and that is much, much more annoying.

  • by Kent Recal ( 714863 ) on Wednesday January 14, 2009 @06:47PM (#26457109)

    Sorry to break it to you, but boot time is measured from the push of the power button to a usable desktop.
    You may enjoy your 26 seconds of pretending that "this is not really happening" - most other people don't.

  • Could care less? (Score:2, Insightful)

    by Finallyjoined!!! ( 1158431 ) on Wednesday January 14, 2009 @06:48PM (#26457131)
    Idiot.

    The expression is "I couldn't care less", i. bloody e. you are expressing utter contempt: there is nothing below whatever you are professing not to care about.

    "I could care less": Yes this is almost, maybe 30%, possibly 73.2%, up my list of hates, apart from purple feathers though, but then I'm a bit airy-fairy anyway.

    PS

    With Windows, you are always having to reboot the system due to everything from software installs to changing a network connection.

    Only if you don't know what you are doing. I'm actually posting from a WinXP machine which has been up since April 2004: updates are turned off, all paths to Microsoft are blocked by my firewall, I don't install stuff I don't need, and I use Opera & Eudora.

  • Re:Who cares? (Score:3, Insightful)

    by Quarters ( 18322 ) on Wednesday January 14, 2009 @06:52PM (#26457181)
    You haven't had to restart Windows due to a networking configuration change in almost 7.5 years. You haven't had to restart Windows due to a driver change for almost 3.5 years now. Please get your facts correct.
  • Re:Who cares? (Score:5, Insightful)

    by sofar ( 317980 ) on Wednesday January 14, 2009 @06:55PM (#26457217) Homepage

    Server maintainers care, because people pay them a ton of money to get a guaranteed 99.999% uptime or more (extreme case, like NY stock exchanges etc.). That's only 5(!) minutes of downtime a year, and if you can boot in 5 seconds (and let's say shut down in 5 as well), you can reboot 30 times a year for security updates. If you reboot in 30+30 seconds, that's only 5 reboots.

    Imagine having a SCSI RAID array which takes 1 minute to initialize. A 20+20 boot+shutdown time would give you barely 3 reboots per year; a 5+5 boot+shutdown gives you about 4 in the same budget.

    You care for netbooks, too. The batteries are small: if you waste one minute at boot and a minute at shutdown, during which the CPU and SSD (or worse, hard disk) are working hard, you lose two minutes of battery time, which translates into 5+ minutes of idle or web-browsing time. Reboot your netbook a few times to quickly send a blog update from the airport, and you've lost half an hour of effective work time.

    Bottom line: shorter boot (and shutdown) means more _net_ work time available, for both AC-connected and mobile devices.
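
    The arithmetic, spelled out (a quick sketch; the cycle times are the illustrative ones above):

        # Five-nines downtime budget, in seconds per year:
        echo "365.25 * 24 * 3600 * 0.00001" | bc    # ~315 s, i.e. ~5.3 minutes

        # Reboots that fit in that budget for a given shutdown+boot cycle:
        echo "315 / (5 + 5)" | bc      # 5 s down + 5 s up   -> 31 reboots
        echo "315 / (30 + 30)" | bc    # 30 s down + 30 s up -> 5 reboots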

  • by DragonWriter ( 970822 ) on Wednesday January 14, 2009 @07:08PM (#26457447)

    At this point, tweaking filesystems to accommodate not-really-random-access media seems like backwards thinking.

    Over the next couple of years, SSDs' performance-benefit-to-price-premium ratio may improve to the point where they are usually the primary, and often the only, drive on new desktop and laptop systems, but Linux is more than an operating system for the newest desktop/laptop hardware. It's also for servers, and older hardware, and...

    And, of course, ext4 is hardly the only supported filesystem.

  • by geekmux ( 1040042 ) on Wednesday January 14, 2009 @07:18PM (#26457665)

    This is one of my pet peeves: why can't computers boot in a second or less?

    Cripes, I'm all for innovation, but damn, if you're literally counting the half-seconds sucked from your obviously insanely demanding lifestyle waiting for your current OS to boot up, then what the hell are you doing reading Slashdot? ;-)

    Hell, while we're on the topic of the damn-near unobtainable, I'd simply settle for true open-document standards, and a pop-up free Internet. Give me that, and I'll go get another cup of coffee while I wait for my OS to boot.

  • by Bryan Ischo ( 893 ) * on Wednesday January 14, 2009 @07:25PM (#26457787) Homepage

    Why does it take so long to discover those drives and other devices? Why does a CD-ROM drive take hundreds of milliseconds to be recognized during a POST? These things should happen basically instantly at modern hardware speeds, and yet they don't.

    It reminds me of NFS timeouts. Years ago when I worked in an environment where everyone NFS mounted a shared filesystem, there would occasionally be outages on the server or in the network. My local system would lock up and hang for MINUTES while it timed out on requests to the NFS server. I could never understand why the thing didn't just time out in seconds rather than minutes. Even at that time, we were running 10 MBit or maybe 100 MBit network connections; if the remote system is going to respond, it's going to happen at MOST after a few second delay. Waiting for minutes just seems dumb.

    The same sort of thing happens a lot with web browsers, too, which wait far too long for servers to time out. If the server doesn't respond in 10 seconds, it's not going to respond. Ever. There's no reason to wait 30 seconds or longer to time out an HTTP connection ...
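
    For what it's worth, those minutes-long NFS hangs are the default "hard" mount behaviour, and they were tunable even then. A hedged sketch (server:/export and the numbers are placeholders):

        # A "soft" NFS mount gives up after retrans retries of timeo
        # tenths of a second each, instead of retrying forever:
        mount -t nfs -o soft,timeo=50,retrans=3 server:/export /mnt/share

    The catch, and the reason "hard" is the default, is that a soft mount that gives up can hand I/O errors to applications mid-write.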

  • by rnentjes ( 835795 ) on Wednesday January 14, 2009 @07:26PM (#26457813)
    Of course the geek doesn't get any money; any random geek would have come up with a working implementation when asked the right questions. The money is in the questions here, not the answers.
  • by pz ( 113803 ) on Wednesday January 14, 2009 @07:26PM (#26457817) Journal

    But why does it take more than a few milliseconds to discover a device? I've never understood this. We have had CPUs that have had sub-microsecond execution cycles for DECADES now, and yet the timeouts for communicating with devices are still measured in seconds. Why?

    Device discovery should take no more than a few milliseconds for an entire machine, with the possible exception of disk drives which presumably need to spin up and verify correct operating speed to report back on a self-check.

  • by bucky0 ( 229117 ) on Wednesday January 14, 2009 @07:44PM (#26458081)

    You'd have to have some sort of auto-login set up, but it'd be disingenuous to call your PC booted when it's just sitting at the login screen. On my Ubuntu box I'd estimate a good 50% of my boot time comes after the login screen, before I'm able to do what I wanted to do.

  • by bloodninja ( 1291306 ) on Wednesday January 14, 2009 @07:56PM (#26458267)

    Fire up the system without the proprietary Nvidia blob and file bugs! Ubuntu, like all FOSS, needs _you_ to file the bugs so that the kinks can be worked out. Do not assume that what you see is what the devs see.

    You like Ubuntu and FOSS? Great! Help make it better.

  • Re:Who cares? (Score:1, Insightful)

    by Anonymous Coward on Wednesday January 14, 2009 @07:57PM (#26458281)
    I call bullshit! I didn't have to reboot Windows after I installed OpenVPN, for example, which creates a "fake" network connection. The thing about the restarts is mostly the fault of the applications, because software vendors write apps with one thing in mind: "Windows = desktop OS," which means the user can reboot as often as they want. Then we go around bitching that Windows sucks because you have to reboot. Damn it man, that thing about the restarts ended almost a decade ago, when we got Win2k.
    It's ignorant people like you that give the OS a bad name; it's rarely the actual software that's built into it. There are server editions of Windows, you know... There's Windows Server 2003, which kicks Linux's ass big time, but nooooooohohohooooo, you're not thinking about using it, because Win95 sucked. So why bother trying something new when you can just whine about the poor overall quality of MS products by giving examples from a single product you tried a long time ago?
    You're nothing but a spoiled brat! [yeah, ad hominem, but that doesn't mean I'm not right]
  • Re:Who cares? (Score:3, Insightful)

    by drsmithy ( 35869 ) <drsmithy&gmail,com> on Wednesday January 14, 2009 @08:12PM (#26458467)

    On Linux, I never have to reboot. Basically my desktop stays on unless I am taking a long weekend. I understand that efficiency is good, however, a fast boot-up does not seem like news to me.

    Note that for most people, "having to reboot" is irrelevant, since they turn their computers off every night anyway.

    Further note that for most of those who are left, the practical difference between "reboot" and "restart all your applications and services" is zero.

  • by pavon ( 30274 ) on Wednesday January 14, 2009 @08:16PM (#26458555)

    Almost all of those issues are from third party software.

    And it is Canonical's job to test that software and choose which versions they are going to ship. With the last release of Ubuntu, all sorts of software that used to work broke on my computer. That is their fault, for choosing to package bad software.

    Also, for what it's worth, I've been having the same problem he is having with Flash when using Gnash and swfdec as well. It seems like ndiswrapper has some issues in the latest Ubuntu that were not a problem in previous releases, beyond the fact that the Flash plugin sucks.

  • by freddy_dreddy ( 1321567 ) on Wednesday January 14, 2009 @08:57PM (#26459093)
    This is exactly why I left the Linux scene and consider it a failure.
    Filing a bug report isn't answered with a solution or a fix, but with one of these:
    - The bug report already exists
    - You're doing something wrong, it's not Ubuntu/Linux
    - It's your hardware, not Ubuntu/Linux
    - It's because of those evil hardware companies, not Ubuntu/Linux
    - You have the source code, fix it yourself
  • by 5865 ( 104259 ) on Wednesday January 14, 2009 @09:22PM (#26459401)
    "Premature optimization is the root of all evil" - Donald Knuth
  • by vux984 ( 928602 ) on Wednesday January 14, 2009 @10:03PM (#26459907)

    First, the main thrust of my post was really to comment that getting to the desktop BEFORE everything else is running is a victory simply not worth fighting for.

    I.e. deferring things to start AFTER you arrive at the desktop, to give the appearance of a faster boot time, is pointless if you need those things to actually use the machine... or even if the mere fact that they are still starting up is pegging your CPU/hard drive, making it essentially unusable even when you aren't loading anything dependent on the items still loading.

    That said...

    The entire distribution is 50 MB and it includes network, GUI, etc... Based on your numbers above we should be able to load this entire distro into memory within a second or two, maybe another 2-3 additional seconds if you want to add a 3D desktop like Compiz.

    I think potentially, yes, this is theoretically possible. There is a laptop out there, for example, with an instant-on Linux distro flashed into the BIOS that you can use to quickly browse the web etc. without having to boot the OS off the hard drive.

    http://www.itnews.com.au/News/77281,asus-laptops-to-offer-instanton-linux.aspx [itnews.com.au]

    So this absolutely -can- exist. I'm not sure just how instant "instant-on" is here, but it sounds like it's in the 3-5 second range.

    Other services could potentially be loaded in the background after the login screen and/or desktop are available.

    I think this is a bad idea. See above, for why.

    I see little reason why an OS like Ubuntu can't reduce boot times down to the sub 10 second range with a little work. It's all about scheduling.

    Sure, I agree 10 seconds is quite conceivable.

    Much beyond that, though, and I think querying the hardware itself becomes the bottleneck. Just probing all the buses etc. to make sure nothing has changed will probably take a few seconds.

    If the OS has to do a bunch of initializations every time it starts up, why can't it just do a memory dump after those initializations, then only load the ones that change every time the computer starts?

    Why bother reinventing the wheel? We ALREADY have "suspend to RAM" and "suspend to disk", and that is basically what they do. Trouble is, the device drivers have to support it for it to work properly. And it turns out that, for suspend to disk at least, reading the big ballooned-out memory image back from disk is usually SLOWER than just booting clean, because of all the extra data involved.

    And on top of that you STILL have to wait for a pile of device initialization because simply loading in your network/video/audio/etc driver to a particular ram image state doesn't do a thing towards actually putting the network/video/audio/etc device into a suitable state.

    (This is in fact precisely why you need a dedicated protocol to communicate that you are going into and out of suspend, and device-driver support for it.)
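
    For the curious, the kernel exposes both through a single sysfs file; which states actually work depends entirely on that driver support (a sketch, run as root):

        cat /sys/power/state            # lists supported states, e.g. "mem disk"
        echo mem  > /sys/power/state    # suspend to RAM
        echo disk > /sys/power/state    # suspend to disk (hibernate)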

  • by Eil ( 82413 ) on Wednesday January 14, 2009 @10:24PM (#26460149) Homepage Journal

    You know how everyone wanted a Linux-based operating system that "just worked" on a wide variety of hardware with drivers for everything? And didn't throw a shit-fit if you moved the hard disk to a completely different machine and tried to boot it up?

    That's why Linux takes so long to boot these days. You can have very good hardware compatibility or you can have very good boot speed. You can't have both. (Well, until someone invents persistent RAM.)

    Why does it take so long to discover those drives and other devices? Why does a CD-ROM drive take hundreds of milliseconds to be recognized during a POST? These things should happen basically instantly at modern hardware speeds, and yet they don't.

    The CD-ROM does respond to the BIOS very quickly. What takes forever is the BIOS checking each controller, chain, and bus location for a device. Waiting for those probes to time out is what takes so long. This isn't just the BIOS either, it's the Linux kernel too and any OS that might want to speak to whatever hardware might happen to be there.

    Even at that time, we were running 10 MBit or maybe 100 MBit network connections; if the remote system is going to respond, it's going to happen at MOST after a few second delay. Waiting for minutes just seems dumb.

    Seems dumb to you, the user. Didn't seem dumb to the programmers who wrote NFS and whatever application you were using. Why? NFS is 1) a block device, and 2) largely a hack. The way UNIX was designed, block devices just don't disappear from the system, just like wheels (ideally) don't go flying off your car while you're driving down the road. But with NFS, a block device can suddenly become unavailable, and as far as the OS is concerned that's just really, really bad for all sorts of reasons. The programmers figured that in order to make the system as robust as possible, they'd extend the timeout as long as tolerable to reduce the chances of data loss and corruption as much as possible. It's conceivable that a large number of problems could be resolved in a matter of minutes (say, somebody tripped over the power cord for the network switch), thus preventing the loss of what could be very valuable data.

    The same sort of thing happens a lot with web browsers, too, which wait far too long for servers to time out. If the server doesn't respond in 10 seconds, it's not going to respond. Ever. There's no reason to wait 30 seconds or longer to time out an HTTP connection ...

    You click a mouse button. This initiates a request which, after all of the appropriate nameservers have been consulted, hops from your machine over dozens of routers, switches, and cables owned by different countries and corporations. It travels thousands of miles away to some place you can't even pronounce. Once there, the server recognises the request and acts on it, sending you back a mix of static content, images, and database content several orders of magnitude greater in size than your original request. The content then travels back to you another few thousand miles, perhaps via a different path until it eventually reaches your machine where it is processed and displayed in a mostly-legible fashion. And you have the gall to complain that sometimes it takes longer than 10 seconds for all of this to happen?

    Good. Fucking. Grief.

    I'm continually amazed that it works at all, and I'm a sysadmin at a web hosting company. Almost every day I run across a site I want to visit that takes longer than 10 seconds to respond in full. There are lots of very good reasons that a website might take 10 to 30 seconds to load in your browser. The authors of the HTTP protocol, web server software, and web browsers having a personal grudge against you sure isn't one of them.

  • Re:Who cares? (Score:4, Insightful)

    by PitaBred ( 632671 ) <slashdot&pitabred,dyndns,org> on Wednesday January 14, 2009 @10:33PM (#26460235) Homepage
    If you're guaranteeing 5 nines, you'd be stupid to be using a single machine. Update a test environment, verify it works, then take down your cluster a machine at a time updating each one. No downtime if you do it right, that way you can "bank" your downtime to deal with network outages and such that are outside your control.
  • by icydog ( 923695 ) on Thursday January 15, 2009 @12:30AM (#26461325) Homepage
    2^25 is around 33 million. Surely 33M cycles isn't a second these days?
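    Quick check (assuming an illustrative 3 GHz clock):
        echo "2^25" | bc                          # 33554432
        echo "scale=3; 2^25 / (3 * 10^9)" | bc    # .011 -> ~11 ms, not a second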
  • by fractoid ( 1076465 ) on Thursday January 15, 2009 @01:11AM (#26461731) Homepage
    And sadly, if a 1000-to-1 bet has 1000 punters, chances are that one of them will win it. And will be called a visionary, and people will crap on about how he was amazingly insightful, and a genius, and all that, when in actual fact he just happened to be the one chump who got lucky.
  • Hey, we try. (Score:3, Insightful)

    by Grendel Drago ( 41496 ) on Thursday January 15, 2009 @01:15AM (#26461755) Homepage

    Nearly everyone else working on it is a volunteer doing it in their spare time. We're working on it, I assure you. If a bug report already exists, that's still important to know. If there's a workaround, it may still be that there's a usability issue, and that's valid. If it's a problem with your hardware, what on earth do you expect them to do about it? And if you can live without your shiny 3D eye candy (or can buy an Intel graphics card), you don't run into the evil-hardware-company issue.

    And lastly, the quickest way to fix an issue is to provide a patch. That's not really fair, but, given that you're not paying anyone for the software, that's the way it is. (That doesn't mean that someone who tells you that the only reason you're not a happy user is that you haven't written enough patches isn't a tremendous jerk.) I've gone from filing bugs, to confirming and testing them, to writing my own patches and testcases. It's rewarding, in its own way, to make the system better, bit by bit.

    Honestly, the situation on the Ubuntu tracker isn't that bad. Yes, there are still people who drop into ignored bug reports, ask "Is it still present?" and set the bug to expire if someone doesn't write back that, yes, the bug is still present in the current version, as (in plenty of cases) the owner could see if they just took five minutes to test it. Yes, there's no good way to escalate a bug or get it triaged with a quickness, even if it's something that's really damned important. Given how bad things are at the GNOME bugzilla (bugs wait forever there), I'm pleased in comparison.

    Given all this, it's understandable that Linux isn't for everyone. Hell, look at the state of audio support. It's a damned tragedy [adobe.com]. You have to really love it at this point. I'm motivated by the fact that it's worlds better than it was only a few years ago: suspend/resume actually works sometimes, a major vendor (Intel) actually maintains bleeding-edge open-source video drivers as part of the X.org distribution, and there's a lot more polish on things--all the little usability details that sound like nitpicking when you enumerate them, but add up to a good or bad user experience, in the final evaluation. You may have left Linux--and, really, if you're not willing to put up with quite a bit at this point, it's not for you--but it's not a failure.

  • by swillden ( 191260 ) <shawn-ds@willden.org> on Thursday January 15, 2009 @01:31PM (#26468731) Journal

    If it boots in less than 1/3 to 1/6 as much time as ext3... Surely there will be an improvement in overall performance?

    I seriously doubt the major factor in boot time improvement is the file system. They're also continuing to work on Upstart, their replacement for the SysV init daemon, and one of Upstart's primary goals is to increase parallelism in the boot process. The traditional boot process is quite linear and as a result spends a lot of time waiting around.
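
    The idea is that an Upstart job declares the events it depends on instead of taking a numbered slot in a sequence, so independent jobs can start in parallel as events fire. A hedged sketch (the stanzas are real Upstart syntax, but the job directory and available event names varied across releases, and mydaemon is a placeholder):

        # /etc/init/mydaemon.conf (location varied by release)
        start on filesystem       # start as soon as this event fires
        stop on runlevel [06]     # stop on halt or reboot
        respawn                   # restart the daemon if it dies
        exec /usr/sbin/mydaemon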
