Linux Kernel Moves To Github

Posted by samzenpus
from the moving-to-better-quarters-on-campus dept.
An anonymous reader writes "Linus Torvalds has announced that he will be distributing the Linux kernel via Github until kernel.org servers are fully operational following the recent server compromise. From the announcement: 'But hey, the whole point (well, *one* of the points) of distributed development is that no single place is really any different from any other, so since I did a github account for my divelog thing, why not see how well it holds up to me just putting my whole kernel repo there too?'"

  • Great (Score:5, Interesting)

    by Mensa Babe (675349) * on Monday September 05, 2011 @09:00AM (#37307768) Homepage Journal

    I clicked the link and here's what I got: "Server Error 500 - An unexpected error seems to have occurred. Why not try refreshing your page? Or you can contact us if the problem persists." with a cute parallax scrolling animation of the GitHub logo falling down the Grand Canyon. I've never seen a 500 error on GitHub before.

    Linus writes: "since I did a github account for my divelog thing, why not see how well it holds up to me just putting my whole kernel repo there too?"

    Why not? Because you just broke GitHub! That's why!

    And now let's all remain silent while the instant, distributed, cpu-intensive, encrypted https slashdotting of GitHub starts in 3... 2... 1...

  • by AlexiaDeath (1616055) on Monday September 05, 2011 @09:07AM (#37307808)
    Because it's gonna bite :) It's working now btw...
  • by Jose (15075) on Monday September 05, 2011 @09:16AM (#37307860) Homepage

    pfft...this is clearly a slashvertisement for Linus' divelog!

  • by Dan Dankleton (1898312) on Monday September 05, 2011 @09:27AM (#37307916)
    Has Linus changed his mind in the last week? http://article.gmane.org/gmane.comp.file-systems.ext4/27628 [gmane.org]
    • by Anonymous Coward on Monday September 05, 2011 @09:47AM (#37308008)

      No, he specifically addressed this in his post:

      NOTE! One thing to look out for when you see a new random public
      hosting place usage like that is to verify that yes, it's really the
      person you think it is. So is it?

      You can take a few different approaches:

        (a) Heck, it's open source, I don't care who I pull from, I just want
      a new kernel, and not having a new update from kernel.org in the last
      few days, I *really* need my new kernel fix. I'll take it, because I
      need to exercise my CPU's by building randconfig kernels. Besides, I
      like living dangerously.

        (b) Yeah, the email looks like it comes from Linus, and we all know
      that SMTP cannot possibly be spoofed, so it must be him.

        (c) Ok, I can fetch that tree, and I know that Linus always does
      signed tags, and I can verify the 3.1-rc5 tag with Linus known public
      GPG key that I have somewhere. If it matches, I don't care who the
      person doing the release announcement is, I'll trust that Linus signed
      the tree

        (d) I'll just wait for kernel.org to feel better.
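
      For anyone actually going the route of option (c), a rough sketch of what that check looks like with git and GPG - the key ID below is a placeholder, use the fingerprint of Linus' key that you already trust:

        # grab the tree from its new home
        git clone git://github.com/torvalds/linux.git
        cd linux
        # make sure the signer's public key is in your keyring first
        gpg --recv-keys <linus-key-id>
        # check the signed tag against that key
        git verify-tag v3.1-rc5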

  • by whiteboy86 (1930018) on Monday September 05, 2011 @09:34AM (#37307936)
    The Linux kernel is very mature at this point, but some basic functionality, like a HAL (hardware abstraction layer), is not present and not even planned. Linus is perhaps happy with the current 3.x state of Linux, but lots of people demand more... I recently ventured to the ReactOS website and have seen lots of activity in the SVN. This is maybe thanks to ReactOS's involvement in Google Summer of Code 2011; there are lots of commits on a daily basis in the trunk now, and the project seems to be getting in motion again.
    • by Anonymous Coward on Monday September 05, 2011 @09:40AM (#37307974)

      Why is HAL such a good idea?

      I know that I can move a Linux installation image from one machine to another without a glitch, while Windows (which has a HAL) fails miserably if the source and destination machine vary in any non-trivial way.

      • by ledow (319597) on Monday September 05, 2011 @10:02AM (#37308082) Homepage

        Never have I had to agree with a post more.

        My employers, not particularly tech-literate, have even seen this and learned it first-hand, and have had to get themselves out of the mindset that "moving that server to new hardware means configuring a new one, effectively".

        Move a Windows server - you can be in for a world of hurt unless you want to fresh-deploy it every time. Move a Windows client - historically you'd be prepared for blue-screens because you have the "wrong" processor type (Intel vs AMD requires disabling some randomly named service via the recovery console, for example), reinstalling the vast majority of the drivers (probably from a 640x480 safe mode), and even then you can't be guaranteed to get anything back and working - not to mention activation, DRM, different boot hardware (e.g. IDE vs SATA), etc.

        Move a Linux server - unless your OWN scripts do something incredibly precise and stupid with an exact piece of hardware, it will just move over. At worst, you'll have to reassign your eth ports to the names you expect using their MAC address (two seconds in Linux, up to 20 minutes in Windows and a couple of reboots).
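
        (For what it's worth, on most current distros that "two seconds" is one udev rule, something like the following - the MAC address is obviously a placeholder:)

          # /etc/udev/rules.d/70-persistent-net.rules
          # pin the card with this MAC to the name the configs expect
          SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"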

        Hell, you can even change the kernel entirely, or the underlying filesystem type or any one of a million factors and it will carry on just as before, maybe with a complaint or two if you do anything too drastic but almost always with no ill-effects and a 2-second resolution.

        The only piece of hardware on Linux that I have to "fiddle" with is a USB fax modem that has ZERO identification difference between two examples of itself. You literally have no way to assign them to fax0 and fax1 except guesswork - or relying on the particular USB port name, which wouldn't translate between computers. But the install has moved through four machines (from an ancient office workstation with IDE - sacrificial hardware to prove my point about its usefulness - to a state-of-the-art server-class machine with SAS RAID6 and redundant power supplies) without so much as a byte changing - just me swapping the fax modems over rather than bothering to code the change.

        And if the hardware breaks? No big deal - pull out the old machine and/or any random desktop machine (or even laptop) with enough ports, image it across byte-for-byte and carry on regardless.

        People don't get that this is a BIG feature that they should be pushing - whereas with Windows I've heard (and seen) horror stories about RAID cards not working without the exact controller/firmware/driver combo that they were set up with, blue-screens and hangs and activation dialogs when you attempt something like that, not to mention HOURS of fiddling to get the image running exactly how it was on the original machine (if that's even possible). It goes along with the "plaintext" / "plain file" backup strategy (hell, my /etc/ is under automatic version control with two commands!), etc.
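
        (One way to do the /etc trick, assuming etckeeper is packaged for your distro - the install line here is Debian-flavoured:)

          apt-get install etckeeper     # pulls in a VCS (git by default) and hooks into the package manager
          etckeeper init && etckeeper commit "initial snapshot of /etc"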

        The point of an OS is to make the software independent of the underlying hardware. Windows lost that independence a LONG while ago (Windows NT / 95). Linux still has it because of the underlying design of the whole thing.

        Don't even get me started on restoring an "NT Backup" without having the exact correct hotfix/service pack setup that you were backing up from...

        • by tfigment (2425764) on Monday September 05, 2011 @11:25AM (#37308594)

          We had some Windows and Linux (CentOS) servers that were running on real hardware. We consolidated them to a VMware ESXi host. The Windows images moved over seamlessly and without issue. The core Linux box with svn, wiki, bug tracker, ... would not migrate properly, so we ended up reinstalling the OS and migrating the apps and data by hand. Overall, the Windows box took the time to copy the data plus 15 minutes, and the Linux box took the time to copy the data twice plus half a day to troubleshoot and reinstall.

          Nothing was particularly special in the configurations of either that I recall. I suppose we used the wrong version of linux or something. Also not sure if a HAL would help or hurt here or if it was something with vmware but it wasn't as easy as you pointed out above.

          Maybe if one of the Windows images had trouble it would have been 1+ days instead of .5 days or something but then again they didn't.

          • by rubycodez (864176) on Monday September 05, 2011 @11:40AM (#37308724)
            Uh, you do realize VMware contains a huge amount of software to make that seamless MS Windows "physical to virtual" thing happen? Now I myself have to migrate Linux machines into VMware for certain clients, and I've found it easy if the application configuration files are understood, Linux device naming and assignment priority are understood, fstab is understood, and the network plugging within VMware is done correctly.
          • by decep (137319) on Monday September 05, 2011 @11:45AM (#37308766)

            I assume you used the VMWare Converter P2V tool to move your servers, which works very well for Windows and not as well for Linux. VMWare Converter fiddles with the underlying Windows configuration so the image will work well on VMWare.

            If you had used a Linux cloning tool, such as Clonezilla, you probably would have had a different experience. Of course, some older distros such as RHEL4/CentOS4 also did stupid things, like having the initrd contain only the SCSI driver it needs to boot on specific hardware. Sometimes you would have to go back to the original hardware and tell it to store/load the SCSI driver for VMware.
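
            A common fix on those older RHEL4/CentOS4 boxes was to regenerate the initrd with the VMware SCSI module added by hand, roughly like this (the exact module name depends on which virtual controller you picked):

              # add the controller alias (adjust if one already exists)
              echo "alias scsi_hostadapter mptscsih" >> /etc/modprobe.conf
              # rebuild the initrd for the running kernel
              mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)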

            Sometimes it pays to have better sys-admins.

        • by hedwards (940851) on Monday September 05, 2011 @12:11PM (#37308970)

          And don't forget that if you decide to upgrade from a single-core processor to a multicore processor, there's an incredibly annoying procedure that involves doing a repair installation just to activate the other cores. Which I've had to do in the past, and it's not fun, all because MS doesn't feel like providing a reasonable way of doing it.

          • I know what you're talking about as I have heard of doing it in NT4/2K, but I can say for certain that I did not have to do that in either XP or Vista when I upgraded from an Athlon 64 3200+ to an Athlon X2 3800+. Every computer I've worked on since then has been multicore, so I don't know if I just got lucky or what, but it just worked.

            Also, at this point I don't think anyone cares anymore; it's unreasonable to expect such an update for old OSes, and no one has to worry about this on new builds since only the lowest of the low-end netbook/nettop CPUs are single-core anymore (and at least Intel's have Hyperthreading, which results in the SMP kernel being used anyway).

            Now if you accidentally install a machine with your BIOS set to emulate IDE on the SATA ports and want to switch it to proper AHCI mode, you're in for a world of hurt. Supposedly that can be fixed by a procedure similar to the SMP switch on Win2K, but I never got it to work and have reinstalled two freshly built machines recently because some BIOSes are stupidly set in that mode.

            • by hedwards (940851) on Monday September 05, 2011 @05:24PM (#37310934)

              Generally it only happens if you trade up from a Sempron to one of AMD's pin compatible multicore processors or if you're using nLite OS and got some of the settings wrong. I don't think that Intel had offerings which would allow you to go from single to multicore without changing the motherboard, I could be wrong though. I'm sure it doesn't happen that much these days.

              However, considering that XP was sort of the OS that this was most likely to occur with, they should have fixed it. I'm guessing the main reason they didn't was that they wanted to force people to upgrade to Vista.

            • by LinuxIsGarbage (1658307) on Monday September 05, 2011 @08:42PM (#37311814)
              My understanding is there are several different HALs: ACPI, ACPI-Uniprocessor, and ACPI-Multiprocessor. If your single core is using "Uniprocessor" it will automatically recognize new cores and convert to "Multiprocessor". If it's plain ACPI, it will still only recognize one core. What's more, in Win2K you can go into Device Manager and "update driver" to change the HAL. With WinXP you can only change DOWN levels, not up - that is, without an aftermarket hack, a program called "HALu" that's hard to find.
        • by KDingo (944605) on Monday September 05, 2011 @01:06PM (#37309350)

          In my first IT job several years ago I was tasked with creating new backup systems, and by doing so I learned one of the most amazing things about Linux, and that is the cloneability of the entire machine from a single filesystem backup.

          I tried to restore one of our webservers as an exercise. From a liveboot environment, I partitioned the disk, formatted it, mounted the filesystems, and rsynced over the root filesystem from backup. After that, I installed the bootloader. I was just amazed that the new system booted up without a hitch (apart from complaining that the system wasn't shut down properly); I was floored seeing that and showed all my fellow coworkers =)
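
          (For anyone curious, the whole exercise boils down to something like this from the live environment - device names and the backup location are placeholders:)

            fdisk /dev/sda                                # recreate the partition layout
            mkfs.ext3 /dev/sda1 && mount /dev/sda1 /mnt   # format and mount the new root
            rsync -aAXH --numeric-ids backuphost:/backups/webserver/ /mnt/
            # reinstall the bootloader from inside the restored tree
            mount --bind /dev /mnt/dev; mount --bind /proc /mnt/proc; mount --bind /sys /mnt/sys
            chroot /mnt grub-install /dev/sda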

          Of course I know that in Unix everything is a file, and these things are possible, but seeing it in action is an experience. I'm happy that after learning how the innards of a Linux system work, I was able to apply it.

      • by whiteboy86 (1930018) on Monday September 05, 2011 @11:29AM (#37308630)
        > Why is HAL such a good idea?

        The lack of a HAL and of a standardised DDK is a major turn-away factor from Linux for many; sometimes you can't go 'open source' if 3rd-party technologies and NDAs are involved. It would be more flexible to be able to just optionally plug stuff in without the hassle of sharing...
        • by Tomato42 (2416694) on Monday September 05, 2011 @12:43PM (#37309212)
          if only you could foresee that customers will want to use something other than Windows on a 1GHz Geode with 128MB RAM...

          Seriously, Linux has huge market share in anything but desktops. If you make hardware you know that someone, somewhere will want to use it with Linux. Making the driver OSS from the start will save you tons of problems in the long run.
        • by DaleGlass (1068434) on Monday September 05, 2011 @01:32PM (#37309482) Homepage

          In such a case, I do not care for what you make.

          Seriously, if Linux won't support it out of the box, I'm not buying it. I got burned before with printers that only work on specific versions of Windows; not going to have that again.

          I only make an exception for 3D drivers and will stop doing that as soon as I can switch to an open driver.

          • by WorBlux (1751716) on Monday September 05, 2011 @02:59PM (#37309998)
            I believe CUPS is actually an example of a HAL. A single PPD file will let you drive that printer with any version of CUPS (Mac, Linux, FreeBSD, Windows, whatever) on any architecture (x86, x86_64, SPARC, Alpha, ARM, MIPS, MIPSel, PPC). D-Bus provides some abstraction, libkb quite a bit, FUSE as well. There are some abstraction layers available for Linux systems; it's just that it's done through user space rather than the kernel.
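
            (For example, pointing any of those CUPS builds at a printer is the same one-liner; only the PPD changes. The queue name, device URI and PPD path here are made up:)

              lpadmin -p office-laser -E -v socket://192.168.1.50:9100 -P /usr/share/ppd/custom/office-laser.ppd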
            • by walshy007 (906710) on Monday September 05, 2011 @03:46PM (#37310308)

              PPDs are PostScript Printer Description files. They are nearly human-readable and only describe what the limits of the printer are. They are used with native PostScript printers.

              Native PostScript printers have a craptonne (compared to non-PostScript ones) of processing power and memory, and do most of the work themselves; hell, I can plug a USB stick with PDFs on it into mine and get it to print without a PC at all. The catch, of course, is that you are generally looking at a few thousand for such printers.

              • by DaleGlass (1068434) on Monday September 05, 2011 @04:48PM (#37310710) Homepage

                Thousands? They're dirt cheap these days.

                Samsung ML-2850 and similar for instance: costs around $130, has a network interface and is compatible with everything, prints double sided out of the box. Box advertises it as Linux compatible even. I'm not sure if it's possible to plug a stick into it though.

                Only downside to it I can see so far is chipped cartridges, but there seem to be workarounds for that.

                • by walshy007 (906710) on Monday September 05, 2011 @11:51PM (#37312580)

                  The dirt cheap ones wind up seriously costing you in operating costs and tend not to live as long, a 5000 black page toner cartridge for the one you listed was seen for $75 cheapest, $150 on average, mine is $40 for 6000.

                  Didn't think postscript printers had hit the cheap and disposable category yet, mine are business workstation types.

                  The bugger even emails you when the toner is low, goes 30 pages per minute on A4, 15 on A3, has more than half a gig of RAM, etc., etc.

                  • The dirt cheap ones wind up seriously costing you in operating costs and tend not to live as long, a 5000 black page toner cartridge for the one you listed was seen for $75 cheapest, $150 on average, mine is $40 for 6000.

                    Didn't think postscript printers had hit the cheap and disposable category yet, mine are business workstation types.

                    Depending on workload, 5000 sheets for $75 may be perfectly reasonable. I don't print a lot, yet the 2500 sheets for $75 Brother unit is perfectly fine. $40 for 6000 sheets is good, but if the machine costs me $2000 more, I may never make it back in supplies.

                    If you print a lot, yes it makes sense to consider consumables. If not, the extra cost may take years to recoup.

                    And there are plenty of printers with postscript-like functionality. Brother calls theirs "BR-Script3" to avoid paying Adobe the licensing fees. It's basically an implementation of PostScript3 though, and Brother has the PPD files available.

      • by Opyros (1153335) on Monday September 05, 2011 @12:14PM (#37308992) Journal
        HAL did seem like a good idea at first; but tell him just once to "switch to manual hibernation control" and then see what happens!

        —Dave Bowman
      • by JonySuede (1908576) on Monday September 05, 2011 @12:25PM (#37309084) Journal

        Before the move:
        1- remove hidden Intel drivers.
        2- use something like Belarc to get your serial number, just in case.
        3- sysprep -pnp -mini -reinstall -nosidgen -reseal -forceshutdown
        Then move the drive or clone it to the new machine.

        Upon reboot, Windows shall detect the new hardware. It may prompt you for the installation files if your hardware differs wildly, but that's all; it may also prompt you for your serial for reactivation, but you noted that at step 2.

      • Very well put. I was scratching my head over GP's post. "Why is HAL good again?" I was still trying to form my thoughts as I read your post. Perfect. And you are exactly right. I've moved a hard drive from one machine to another and booted without ANY tinkering. The only tinkering that I've found necessary is when the video drivers are incompatible, i.e. an installed nVidia driver on a new machine that has a Radeon installed. And I believe that all *nix systems have an easy command-line utility to purge nVidia or Radeon, then install the opposite driver. That accomplished, the system will boot directly to your favorite GUI desktop environment!

    • by JasterBobaMereel (1102861) on Monday September 05, 2011 @09:50AM (#37308026)

      A HAL theoretically makes the system portable, but Linux does not have one (normally) and is still quite portable, and Windows has one, and is not ...?

      ReactOS does not appear to have a HAL (unlike the Windows it is modelled on) but has been ported to other architectures anyway?

    • by Rogerborg (306625) on Monday September 05, 2011 @11:08AM (#37308474) Homepage

      So, your definition of "more than Linux" is Windows NT?

      Sell it to me. What does ReactOS aim to provide that a modern Linux based distro doesn't already give me? Games? Bleeding edge graphics drivers for, uh, games?

      • by BitZtream (692029) on Monday September 05, 2011 @01:30PM (#37309462)

        Sell it to me. What does ReactOS aim to provide that a modern Linux based distro doesn't already give me? Games? Bleeding edge graphics drivers for, uh, games?

        Windows Apps.

        I know you were trying to be snarky, but you failed.

        Windows users can run just about anything Linux has to offer. It's either been ported to Windows natively or will most likely run with Cygwin or the like. Certainly anything with any sort of popularity has been ported to Windows.

        On the other hand, the inverse is not true. Games, as you noted, are a big gaping hole on the Linux side, in most places where Linux does have some sort of comparable package it could hardly be considered a professional equivalent.

        Of course, in reality, ReactOS will just run Windows apps badly.

        The solution is to not run an OS that doesn't suit your needs and to stop fanboying it up. Linux has its place, but that isn't everywhere.

    • by 10101001 (732688) on Monday September 05, 2011 @11:18AM (#37308552) Journal

      The Linux kernel is very mature at this point, but some basic functionality, like a HAL (hardware abstraction layer), is not present and not even planned.

      Perhaps you should read this recent article on LWN about Avoiding the OS abstraction trap. The core point to consider is that a HAL is a means to an end, not an end in itself. Linux's development doesn't need, nor likely should it have, a HAL like other closed OSs, precisely because it doesn't deal with binary drivers. Instead, code is frequently refactored, reorganized, etc., and the main issue is whether the user-space ABI stays intact. All pushing a HAL would do is further constrain the kernel to maintaining another set of user-space ABIs, which would likely end up being suboptimal since no HAL is perfect, and devote developer time to something that, instead of forming organically as hardware/code demands, would wall in the expectations and the ability to provide functionality. Such might be great for a platform that's expected to be deployed, be infrequently changed, and for which driver development is a one-off affair, but that's pretty much the antithesis of the Linux kernel.

      Linus is perhaps happy with the current 3.x state of Linux, but lots of people demand more...

      I don't think Linus is "happy with the current 3.x state of Linux", but I wouldn't be surprised if he's happy with the development process in place, which he's a part of, that can change the 3.x line towards something better. The Linux kernel is constantly changing. There's unlikely to ever be a state, i.e. a single point-in-time snapshot, where the Linux kernel makes most people happy, because there are too many people with too many diverse goals, and they all desire to change the Linux kernel from what it is to what it could be. That's the great thing about an open development model, where people can make that happen. And if nothing else, they can make their own fork of Linux if the Linus tree doesn't make them happy enough.

      I recently ventured to the ReactOS website and have seen lots of activity in the SVN. This is maybe thanks to ReactOS's involvement in Google Summer of Code 2011; there are lots of commits on a daily basis in the trunk now, and the project seems to be getting in motion again.

      That's great news for ReactOS, and no offense to the ReactOS developers, but if I did Linux kernel development, I wouldn't be jumping on board ReactOS development. ReactOS is a noble project and I'm sure in the future I'll get a lot of use out of it, but I view ReactOS as a stopgap project. That is, it's something like Wine, which seems more than anything to be a way to run the occasional Windows program and to give those who are using Windows exclusively now a path to switching to Linux (or OpenBSD or whatever) almost exclusively, still running the occasional Windows program.

      I say this primarily because Windows is a massive beast of an OS, produced through decades of development. Trying to re-implement it with incomplete documentation, reverse engineering, etc. is a task likely to take many times as long, and as such I can, even optimistically, only see ReactOS as an open Windows 2000 or Windows XP clone for the 2020s or 2030s. Having more developers might speed up the process a bit, but assuming there's already a critical mass of developers to move development forward, I think the mythical man-month and the law of diminishing returns kick in pretty quickly, especially when it's hard to delegate a lot of the work when the things themselves are mostly a mass of "stuff we don't have documentation for but that needs implementing anyway".

      Now, if one has a personal interest in having a complete open Windows clone, then please join ReactOS development. I'm certain they'd appreciate the help, even if it doesn't speed up the completion time very much. I certainly commend anyone who works to better an open project that will give advantage to oneself and others. But I wouldn't seriously consider it myself.

    • by jimicus (737525) on Monday September 05, 2011 @11:29AM (#37308632)

      ReactOS suffers from two huge problems:

      1. It's still in alpha stage and it's aiming at a moving target. The idea is it will eventually be broadly equivalent to Windows XP/2003 - I confidently predict that by the time it becomes even remotely stable, we will look upon XP/2003 in much the same way as we look upon NT 3.51 today.

      2. Patents. We've seen what happens when a disruptive Linux-based product comes on the market with Android - everybody and his dog is suing Google. The fact that Linux doesn't try to ape Windows - combined with support from the likes of IBM - has kept Linux on the server relatively free from lawsuits (with the obvious exception of SCO) - ReactOS doesn't have anywhere near the level of support from large commercial organisations; I can't imagine many smaller companies wanting to publicly support something that is essentially painting a big target on its back and shouting "Hey, Microsoft! Aim here!".

    • by Karellen (104380) on Monday September 05, 2011 @12:10PM (#37308964) Homepage

      I recently ventured to the ReactOS website and have seen lots of activity in the SVN [...] there are lots of commits on a daily basis in the trunk now,

      "Lots"? Really? Compared to what? How many do you think is "lots"? The Linux kernel was averaging ~70 commits per day from 2.6.13 - 2.6.27 (source [schoenitzer.de] - that's every day, for more than 3 years) and I'm pretty sure the pace has picked up a fair bit from that in the ~3 years since then, as hinted at by the right hand side of that graph.

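      (Anyone with a clone can eyeball the newer numbers themselves, e.g. by counting commits between two tags or since a date - the tags and date here are just examples:)

        # commits between two release tags
        git rev-list --count v2.6.35..v3.0
        # commits in the last year
        git rev-list --count --since="1 year ago" HEAD
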
    • by BitZtream (692029) on Monday September 05, 2011 @01:47PM (#37309568)

      The kernel IS the HAL. The reason a few commercial OSes have additional ABIs defined inside their kernel is that their closed-source nature needs an open public interface. The entire Linux kernel is open, and the entire thing is the HAL.

      And sorry to sound snide, but ... ReactOS? Seriously? It's a cool concept, but ReactOS by design will always be too out of date to matter. They are reverse engineering an actively developed OS, and they have a fraction of a percent of the development resources devoted to it compared to the OS they are trying to track.

      You don't have to be much of an engineer to know that if it takes 500 people to build a building, 2 aren't going to be able to tear the building apart to see how it was built and then rebuild it without any difference at any sort of rate that will matter. Reimplementing a fixed target is one thing; it can be done given enough time. Catching up to a moving target when you have less speed/power/energy is impossible.

      • by whiteboy86 (1930018) on Tuesday September 06, 2011 @04:38PM (#37320250)
        ReactOS will catch up; the work seems to be almost done anyway, as they only need to reach XP compatibility and that will be enough for 90% of uses. In fact, Linux is the moving target: you write a driver today and tomorrow it will not work, because Linus in his infinite bazaarish wisdom decided to redesign and rename some parts of the code that you were interfacing with. With ReactOS, if you have a driver for the WinXP scheme then it will work today and, more importantly, it will work forever, thanks to a well-defined and stable interface.
  • by Hadlock (143607) on Monday September 05, 2011 @09:50AM (#37308032) Homepage Journal

    My "pre linux kernel" vintage Github account is going up on ebay to the highest bidder!

    Anybody? ...anybody?

  • by Ice Station Zebra (18124) on Monday September 05, 2011 @11:55AM (#37308856) Homepage Journal

    Or single point of failure. You be the judge.

  • by adosch (1397357) on Monday September 05, 2011 @12:02PM (#37308896)
    I wouldn't find this surprising at all. I don't see this as temporary by any means, but more of a 'losing-faith' factor; I'd do the same with my life's prized work as well. I bet that from now on, GitHub is the main pickup point for the latest/stable/greatest kernel releases. I personally hope it doesn't become the only one, and instead is just another avenue to get the kernel source.
  • by BitZtream (692029) on Monday September 05, 2011 @12:59PM (#37309304)

    Github: Your center for decentralized version control!

    Or

    Github: Your hub for RCS without a hub!

    • by petermgreen (876956) <[ten.knil01p] [ta] [hsawgulp]> on Tuesday September 06, 2011 @03:34PM (#37319494) Homepage

      Seems kinda paradoxical yet practically it makes a lot of sense.

      With a traditional VCS you have all clients acting directly on one repo with a linear history. Clones/backups may be taken, but in order to prevent a total mess everyone must agree on which repo is the master. If the master goes down, everyone must agree on a new master or a horrible mess will ensue.

      With a DVCS every checkout is a repo, changesets are pushed or pulled between the repos, and the history is designed to be nonlinear. However, it is still very useful to have one or more repos in shared (or public) locations to allow changes to be easily shared between developers (and, if desired, the users). Sites like GitHub provide a place for this, saving people the hassle of running their own servers.
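
      (Which is also why the "hub" is disposable: any clone can point at several of them at once. The remote names and URLs below are just illustrative:)

        git remote add github git://github.com/torvalds/linux.git
        git remote add korg git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
        git fetch github    # pull changesets from whichever mirror is reachable today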
