
Linux Kernel Moves To Github

An anonymous reader writes "Linus Torvalds has announced that he will be distributing the Linux kernel via Github until servers are fully operational following the recent server compromise. From the announcement: 'But hey, the whole point (well, *one* of the points) of distributed development is that no single place is really any different from any other, so since I did a github account for my divelog thing, why not see how well it holds up to me just putting my whole kernel repo there too?'"
  • Re:Great (Score:5, Informative)

    by Anonymous Coward on Monday September 05, 2011 @09:12AM (#37307834)

    18min ago: "Our DB has blacklisted one of our frontend hosts due to connection errors. We're looking into it."
    7min ago: "Our DB and frontend are friends again. The site is back up."

    From their Twitter feed.

    Their response time to this problem is a great advertisement for their services.

  • by Anonymous Coward on Monday September 05, 2011 @09:47AM (#37308008)

    No, he specifically addressed this in his post:

    NOTE! One thing to look out for when you see a new random public
    hosting place usage like that is to verify that yes, it's really the
    person you think it is. So is it?

    You can take a few different approaches:

      (a) Heck, it's open source, I don't care who I pull from, I just want
    a new kernel, and not having a new update from in the last
    few days, I *really* need my new kernel fix. I'll take it, because I
    need to exercise my CPU's by building randconfig kernels. Besides, I
    like living dangerously.

      (b) Yeah, the email looks like it comes from Linus, and we all know
    that SMTP cannot possibly be spoofed, so it must be him.

      (c) Ok, I can fetch that tree, and I know that Linus always does
    signed tags, and I can verify the 3.1-rc5 tag with Linus known public
    GPG key that I have somewhere. If it matches, I don't care who the
    person doing the release announcement is, I'll trust that Linus signed
    the tree

      (d) I'll just wait for to feel better.
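Option (c) is the sound one: trust the signature, not the hosting site. Below is a minimal sketch of how signed-tag verification works, using a throwaway GPG key and a throwaway repo so the whole flow runs locally; in real life you would clone the torvalds tree from GitHub and run `git tag -v v3.1-rc5` against Linus's well-known public key instead.

```shell
# Demonstrate "trust the signature, not the host" with a disposable key/repo.
set -e
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
# generate an unprotected demo key (stand-in for the maintainer's real key)
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key demo@example.invalid default default never
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git -c user.name=Demo -c user.email=demo@example.invalid \
    commit -q --allow-empty -m 'initial commit'
# sign the release tag, as Linus does for kernel releases
git -c user.name=Demo -c user.email=demo@example.invalid \
    -c user.signingkey=demo@example.invalid \
    tag -s -m 'signed release' v1.0
# verification succeeds only if the signature matches a key in our keyring
git tag -v v1.0
```

The point of the sketch: `git tag -v` checks the tag's GPG signature against your own keyring, so it does not matter which server the tree was fetched from.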

  • by ledow ( 319597 ) on Monday September 05, 2011 @10:02AM (#37308082) Homepage

    Never have I had to agree with a post more.

    My employers, not particularly tech-literate, have even seen this and learned it first-hand, and have had to break themselves of the assumption that moving a server to new hardware effectively means configuring a new one.

    Move a Windows server and you can be in for a world of hurt unless you're willing to fresh-deploy it every time. Move a Windows client and, historically, you'd be prepared for blue screens because you had the "wrong" processor type (Intel vs. AMD requires disabling a cryptically named service via the recovery console, for example), reinstalling the vast majority of the drivers (probably from a 640x480 safe mode), and even then you couldn't be guaranteed to get everything back and working; never mind activation, DRM, different boot hardware (e.g. IDE vs. SATA), etc.

    Move a Linux server and, unless your OWN scripts do something incredibly precise and stupid with an exact piece of hardware, it will just move over. At worst, you'll have to reassign your eth ports to the names you expect using their MAC addresses (two seconds in Linux; up to 20 minutes and a couple of reboots in Windows).
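The MAC-based renaming amounts to one udev rule per interface. A sketch, with made-up MAC addresses (on 2011-era distros this file was typically generated automatically):

```
# /etc/udev/rules.d/70-persistent-net.rules -- illustrative MACs only
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:66", NAME="eth1"
```

After moving to new hardware, you just update the two MAC addresses and the names come back as expected.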

    Hell, you can even change the kernel entirely, or the underlying filesystem type or any one of a million factors and it will carry on just as before, maybe with a complaint or two if you do anything too drastic but almost always with no ill-effects and a 2-second resolution.

    The only piece of hardware on Linux that I have to "fiddle" with is a USB fax modem that has ZERO identifying difference between two examples of itself. You literally have no way to assign them to fax0 and fax1 except guesswork, or by relying on the particular USB port name, which wouldn't translate between computers. But the install has moved through four machines (from an ancient office workstation with IDE, sacrificial hardware to prove my point about its usefulness, to a state-of-the-art server-class machine with SAS RAID6 and redundant power supplies) without so much as a byte changing; I just swap the fax modems over rather than bothering to code the change.

    And if the hardware breaks? No big deal - pull out the old machine and/or any random desktop machine (or even laptop) with enough ports, image it across byte-for-byte and carry on regardless.
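The byte-for-byte imaging described above amounts to a single dd. A sketch using loopback image files so it is safe to run anywhere; in practice you would substitute real block devices such as /dev/sda for the file names:

```shell
# create a small stand-in "disk" image so the example is self-contained
dd if=/dev/urandom of=old-disk.img bs=1M count=4
# the actual move: copy every byte, then flush it to stable storage
dd if=old-disk.img of=new-disk.img bs=1M conv=fsync
# verify the clone is bit-for-bit identical
cmp old-disk.img new-disk.img && echo identical
```

Because Linux doesn't tie the installation to the hardware it was installed on, the cloned image boots on the replacement machine as-is.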

    People don't get that this is a BIG feature that they should be pushing. With Windows, by contrast, I've heard (and seen) horror stories: RAID cards not working without the exact controller/firmware/driver combo they were set up with, blue screens, hangs, and activation dialogs when you attempt something like that, not to mention HOURS of fiddling to get the image running exactly how it was on the original machine (if that's even possible). It goes along with the "plaintext" / "plain file" backup strategy (hell, my /etc/ is under automatic version control with two commands!), etc.
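The "two commands" for versioning /etc presumably refer to something like etckeeper or plain git. A sketch against a scratch directory so it runs without touching a real system; point it at /etc, as root, on your own machines:

```shell
# scratch stand-in for /etc so the example is safe to run anywhere
etc="$(mktemp -d)"
echo 'demo-host' > "$etc/hostname"
cd "$etc"
# one-time setup, then the two commands: stage everything and commit it
git init -q .
git add -A
git -c user.name=Admin -c user.email=admin@example.invalid \
    commit -q -m 'config baseline'
# from here, every config change is one add+commit away from full history
git log --oneline
```

Because the configuration is plain files, the entire system state diffs, reverts, and migrates like any other source tree.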

    The point of an OS is to make the software independent of the underlying hardware. Windows lost that independence a LONG while ago (Windows NT / 95). Linux still has it because of the underlying design of the whole thing.

    Don't even get me started on restoring an "NT Backup" without having the exact correct hotfix/service pack setup that you were backing up from...

  • by qbast ( 1265706 ) on Monday September 05, 2011 @11:09AM (#37308480)
    Sure it did. I tried booting a Windows 7 32-bit installation on a different machine after the laptop died. Both were Fujitsu-Siemens laptops with Intel CPUs, bought about two years apart, but Windows did not boot even in safe mode. The installation CD has a 'boot repair' mode, but it did not manage to do anything useful.
  • by 10101001 ( 732688 ) on Monday September 05, 2011 @11:18AM (#37308552) Journal

    Linux kernel is very mature at this point, but some basic functionalities like HAL (hardware abstraction layer) are not present and not even planned.

    Perhaps you should read this recent article on LWN about avoiding the OS abstraction trap. The core point to consider is that a HAL is a means to an end, not an end in itself. Linux's development doesn't need, nor likely should it have, a HAL like other, closed OSes, precisely because it doesn't deal with binary drivers. Instead, code is frequently refactored, reorganized, etc., and the main issue is whether the user-space ABI stays intact. All pushing a HAL would do is further constrain the kernel to maintaining yet another ABI, one that would likely end up suboptimal since no HAL is perfect, and divert developer time into something that, instead of forming organically as hardware and code demand, would wall in expectations and the ability to provide functionality. That might be great for a platform that's expected to be deployed once, change infrequently, and treat driver development as a one-off affair, but that's pretty much the antithesis of the Linux kernel.

    Linus is perhaps happy with the current 3.x state of Linux, but lots of people demand more.

    I don't think Linus is "happy with the current 3.x state of Linux", but I wouldn't be surprised if he's happy with the development process he's a part of, which can move the 3.x line towards something better. The Linux kernel is constantly changing. There's unlikely ever to be a state, i.e. a single point-in-time snapshot, at which the Linux kernel makes most people happy, because there are too many people with too many diverse goals, all of whom want to change the kernel from what it is to what it could be. That's the great thing about an open development model: people can make that happen. And if nothing else, they can make their own fork of Linux if the Linus tree doesn't make them happy enough.

    I recently ventured to the ReactOS website and saw lots of activity in the SVN. This is maybe thanks to ReactOS's involvement in Google Summer of Code 2011; with lots of commits in trunk on a daily basis now, the project seems to be getting in motion again.

    While that's great news for ReactOS, and no offense to the ReactOS developers, if I did Linux kernel development I wouldn't be jumping on board ReactOS development. ReactOS is a noble project and I'm sure I'll get a lot of use out of it in the future, but I view it as a stopgap project. That is, it's something like Wine: more than anything, a way to run the occasional Windows program, and a path for those using Windows exclusively now to switch to using Linux (or OpenBSD or whatever) almost exclusively, running the occasional Windows program on the side.

    I say this primarily because Windows is a massive beast of an OS, produced through decades of development. Trying to re-implement it from incomplete documentation, reverse engineering, etc. is a task likely to take many times as long, so even optimistically I can only see ReactOS becoming an open Windows 2000 or Windows XP clone by the 2020s or 2030s. Having more developers might speed up the process a bit, but assuming there's already a critical mass of developers moving things forward, I think the mythical man-month and the law of diminishing returns kick in pretty quickly, especially when it's hard to delegate a lot of the work when the things themselves are mostly a mass of "stuff we don't have documentation for but need to implement anyway".

    Now, if one has a personal interest in having a complete open Windows clone, then please join ReactOS development. I'm certain they'd appreciate the help, even if it doesn't speed up the completion time very much. I certainly commend anyone who works to better an open project that will benefit themselves and others. But, I wouldn't seriously consider

  • by JonySuede ( 1908576 ) on Monday September 05, 2011 @12:25PM (#37309084) Journal

    before the move:
    1- remove hidden Intel drivers
    2- use something like Belarc to note your serial number, in case you need it for reactivation
    3- sysprep -pnp -mini -reinstall -nosidgen -reseal -forceshutdown
    4- move the drive, or clone it, to the new machine

    Upon reboot, Windows should detect the new hardware. It may prompt you for the installation files if your hardware differs wildly, but that's all; it may also prompt you for your serial number for reactivation, but you noted it at step 2.
