Linux Software

The Linux Kernel Archives Gets Major Update 36

hpa writes "The Linux Kernel Archives, kernel.org, has gotten a major facelift! After suffering with insufficient bandwidth for far too long, we are now operating with a dedicated 100 Mbit/s connection from Globix. VA Linux gave us a really nice new box to run it on, too, so it is a really wonderful setup we have now. Not only should this resolve the bandwidth shortage, but hopefully we'll be adding new services soon. We have already added anonymous rsync service for the benefit of unofficial mirror sites. I'd appreciate hearing requests for new services on kernel.org; just email me."
This discussion has been archived. No new comments can be posted.

  • Keep in mind that Globix donated a rack and the 100Mb/s connection. This is a non-trivial expense, and it signifies a willingness on their part to play nice with the Linux community.
  • Even though it's undoubtedly being slashdotted as we speak, it loaded before I could blink.
  • Man, you're going to need a quad-processor host with 4 ethernet cards running NT to keep up with that bandwidth... :-)

    Oww! Stop hitting me with that mackerel!
  • From the transmeta.com HTML source:

    "
    There are no secret messages in the source code to this web page.
    There are no tyops in this web page.
    "

  • by Anonymous Coward
    Is it true that rsync does not work effectively on gzipped data? I heard something about some guys at our LUG wanting to get a change into the Debian version of zlib, so that rsyncs from Debian would be faster and use less bandwidth. Apparently the change would remain compatible with the existing gunzip algorithm, but the gzip algorithm would be changed so that it generates the gzipped file in chunks. The way it is now, if you change one byte in the data to be compressed, the whole gzip file changes, and so it does not work well with rsync. The new method would result in slightly larger files, however. Can someone confirm or deny this? Add some more details? Seems like a worthwhile change if this is correct.
  • by Anonymous Coward
    Unfortunately it's true, gzip kills rsync very badly. I've often found that decompressing the file on both ends actually speeds up the transfer!

    The compressed chunk idea sounds interesting. I'm not certain it would work, but it would be nice. A similar alternative would be a block-compressor back-end to rsync that ungzips the source, block-compresses it, then sends/compares that over the wire. On the other end, the same thing could happen: gunzip, block-compress, compare. A severe drawback is that this is extremely CPU intensive, so I think chunked gzip would be better, so long as the "slight" size increase is indeed small. (I don't think your typical user would appreciate a significant packing-size increase just to help out rsyncers.)
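The one-byte cascade described above is easy to demonstrate. The sketch below (filenames invented) flips a single early byte and shows that nearly every byte of the gzip output changes afterward, which is why rsync's rolling checksum finds almost nothing to reuse. The chunked approach being discussed here eventually shipped in Debian's gzip as the --rsyncable option.

```shell
# Create two inputs differing in a single early byte, compress both,
# and count how many compressed bytes differ.
seq 1 20000 > a.txt
cp a.txt b.txt
printf 'X' | dd of=b.txt bs=1 seek=5 count=1 conv=notrunc 2>/dev/null
gzip -n -c a.txt > a.gz   # -n: omit filename/timestamp from the header
gzip -n -c b.txt > b.gz
# Almost every compressed byte after the change point differs:
cmp -l a.gz b.gz 2>/dev/null | wc -l
```

The count printed at the end is the vast majority of the archive's size, even though the uncompressed inputs differ in only one byte.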
  • When I first hit reply, I was going to say something along the lines of "Don't be silly" then I stopped and thought a bit and there's no reason for this not to work, because of kernel modules.

    You'd have to be able to pick a kernel from a list, depending on your architecture, but after that, compiling half-a-dozen kernel modules isn't going to kill your elderly 486, is it?

    In my perfect world, the ONLY time you'd ever recompile the whole thing (or download a new one) would be when the version changed enough for you to want/need it.

    Can any kernel gurus tell me how feasible it would be to restrict the kernel compilation stage to new modules?
    --
  • I think once again we owe a big thank-you to VA Linux for providing yet another non-commercial site with the hardware they need to successfully provide their service for the number of hits they receive. I can't count how many times we have seen this, and it's just great that it's going on. A true sense of the Linux community spirit. :)

    --
    Scott Miga
  • I noticed on the new Kernel.org site is a link to Transmeta. It seems that they updated the site. Cool.

    No Secret Messages Here [transmeta.com] - Uh-huh.

    Later,
    Justin
  • Agreed. VA has a long track record of being very, very generous when it comes to helping out the community. I can attest to that personally: both of the projects I've managed have been sponsored by VA in one form or another.

    All helpfulness aside, it actually makes sense for them to be so generous. By doing so, they strengthen their foothold in the market, increase their name recognition in the industry, and get to enjoy a return on their investment once these various projects bear fruit. By sponsoring all these community projects, they're ensuring their own survival in what may turn out to be a horribly competitive server market in a few years. It's damn good business sense, if you ask me.

    Keep in mind, though, VA isn't a charity organization. They're a business like any other, and pretty soon a board of directors will be calling the shots. Let's hope they see the same benefits. :)

    Bowie J. Poag
  • I would love to see interviews with kernel hackers every two weeks or every month. They could explain the state of their work, why they made particular choices in their code, what they plan to do, and how we can help.

    Maybe it would also help recruit new hackers if the current developers spoke about their work.
  • Yes, download the patch.

    And instead of make menuconfig or make config you should do a make oldconfig, which only asks for new options.
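As a toy illustration of the patch half of that advice, here is what `patch -p1` does with an incremental diff (directory names and version numbers invented; in a real kernel tree you would follow it with `make oldconfig`, which prompts only for the new options):

```shell
# Fake "old" and "new" trees standing in for two kernel versions:
mkdir -p linux-a linux-b
printf 'version 2.2.13\n' > linux-a/Makefile
printf 'version 2.2.14\n' > linux-b/Makefile
# An incremental patch is just the unified diff between the two
# (diff exits nonzero when files differ, hence the || true):
diff -u linux-a/Makefile linux-b/Makefile > patch-2.2.14 || true
# Apply it from inside the old tree, stripping one path component:
(cd linux-a && patch -p1 < ../patch-2.2.14)
# In a real tree: make oldconfig && make dep && make bzImage
```

After this runs, linux-a/Makefile has been brought up to the "new" version without re-downloading the whole tree, which is the entire point of the kernel's incremental patches.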
  • Just curious... how much does 100Mb/s bandwidth to the Internet cost? It's not gonna be cheap (at least, if the prices of 2Mb/s leased lines in the UK are anything to go by).
  • Neat, now I don't have to worry about the DL time of those kernel sources.

    make dep; make clean; make bzImage

    p.s. I find it amusing the little, Operated by Transmeta, all the way at the bottom.
  • I really didn't notice that until I got a cable modem. Then it makes sense. Downloading a new kernel used to take forever. The world is much different when you think 10k/s is 30 times too slow.
  • I had wondered why my logins to kernel.org had been succeeding recently. In general, I try to use mirrors, but when I know that a pre-patch has been released very recently (i.e., within the past hour or three), the chances of a mirror having it are pretty small :(

    Anybody know of any good mirrors that update on a very regular basis, or even better, are push-updated?

    --
    Jeremy Katz
  • by tilly ( 7530 ) on Monday November 01, 1999 @06:40PM (#1570878)
    They can start with my favorite kernel site, Kernel traffic [linuxcare.com]. If you want a reasonable sense of what is going on with the kernel but don't want to follow the mailing list, visit this site every week.

    By and large the sorts of services that people need are already available. They should recognize that, list a few, and then move on.

    I would say that some advice on kernel programming would be good. Sprinkle said advice with links to a few of Torvalds' rants on sending patches. :-)

    Cheers,
    Ben
  • by pb ( 1020 ) on Monday November 01, 1999 @07:04PM (#1570879)
    But remember, guys, when slashdot announces the next super-duper must-have kernel update early, and the ftp site is swamped, relax, take a deep breath, and...

    Check the mirrors!

    Download the patches!

    ...because more bandwidth never means sufficient bandwidth.
    ---
    pb Reply rather than vaguely moderate me.

  • Regular users like you and me should be using a mirror, like ftp.us.kernel.org [kernel.org].
    Click here [kernel.org] for more details on the kernel archive mirror system.

    As a side note, I still haven't heard a reasonable explanation for how and why there is a kernel mirror in Antarctica [kernel.org].

  • It's great that kernel.org has more bandwidth so that it can handle more traffic. Now, when the new kernel comes out, more people will be able to get it quickly. Although it may not be as revolutionary as 2.2, 2.4 should bring a lot more people on board, and the Linux community will continue to grow. Personally, I can't wait to log onto kernel.org and get 2.4. Good job, all of those who run the site.
  • I clicked on the "operated by transmeta" link, and it took me to the updated transmeta page. how come this wasn't a slashdot story!?!

    instead of saying "This web page is not here yet!" it says "This web page is not here yet! ...but it is Y2K compliant. "

    (Hey! You moderator! I said it was offtopic in the subject!)
  • That I will be able to retire my cron script that would wait until 4am to download the latest kernel.

    Now I'll be able to get a new kernel anytime I want!
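For the curious, a script like the one being retired can amount to a single crontab entry. This hypothetical line (path, mirror layout, and version all invented for illustration) fetches a patch at 4am, when the old link was least congested:

```shell
# m h dom mon dow  command
0 4 * * *  wget -q -P /usr/src ftp://ftp.kernel.org/pub/linux/kernel/v2.2/patch-2.2.13.gz
```

With the new 100 Mbit/s pipe, the same `wget` at any hour of the day should do.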



  • It is so cool how VA helps out as much as they do. Is it just me, or do they donate to everything Linux, be it boxes or bandwidth? I bought my last box from them, and while a little on the expensive side, they used the absolute best parts, and when my power supply died there was another one waiting for me in the morning without a hassle. Think about it, though: half the Linux sites seem to have some connection to VA. Any other thoughts on them? I mean, maybe it is a little PR, but I don't even care.
  • Is the 486 the only machine you have? I've pondered building 386/486 kernels on my Celeron box using the CPU-specific flags in config (OK, menuconfig... sue me). I'd think it should work in theory, but I'd at least make damn sure the compilers match (or are at least the same vintage) between boxes should I build add-on modules later.

  • You do realize that Penguins live in Antarctica right ?
  • If you'd looked closer, you'd notice there isn't actually a mirror *in* Antarctica... just a mirror that has volunteered to handle Antarctic traffic. I doubt it adds significantly to their bandwidth consumption :)

  • Actually, distributions sometimes offer just what you are looking for. I know, off the top of my head, that Red Hat and Debian both have packages containing generic images of the latest kernels. You'll have to be proficient with handling modules, however, because the typical strategy is to build as much as possible as modules to satisfy the masses. I know that the Debian kernel-image packages are very slick: they install everything you need to safely boot the new kernel, without being too risky about it, and allow you to return to your old kernel if things go bad.
  • They really do know how to make friends in the Linux community. I bet there will be far less "static" on their inevitable IPO than Red Hat had on theirs.
  • "bzip2 compresses files in blocks, usually 900 kbytes long."
    http://www.bzip2.org/bzip2/docs/manual_2.html#SEC8 [bzip2.org]

    This would seem to suggest that it would work better than gzip (without the above-mentioned change to the gzip process). Rsync would only have to send the updated ~900k block instead of the whole file (and part of that block may be identical). Still not ideal, but the best thing for now. bzip2 compresses better anyway.
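That block independence is easy to check. In this rough demo (filenames and sizes invented, assuming bzip2 is installed), a change confined to the tail of a multi-megabyte file leaves the start of the compressed output byte-identical, so the first differing byte shows up deep into the archive rather than near the front as with gzip:

```shell
# Two large inputs that differ only at the very end:
seq 1 400000 > big_a.txt            # a few megabytes of text
cp big_a.txt big_b.txt
printf 'X\n' >> big_b.txt           # touches only the final ~900k block
bzip2 -c big_a.txt > big_a.bz2
bzip2 -c big_b.txt > big_b.bz2
# Offset of the first differing compressed byte (far from the start):
cmp -l big_a.bz2 big_b.bz2 2>/dev/null | head -n 1
```

All the untouched 900k blocks compress to identical bits, so rsync could in principle reuse everything before the first differing offset.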

  • What about offering some sort of tarball of pre-built kernels for some of the newer kernel versions? My poor 486 can barely handle the strain of compiling 2.0.3x. I'm really wary about moving up to 2.2.x, though I'm not really sure that would be a good idea anyway. I mean, if it's not broke, right? ;)

    I wonder how much of the kernel could be precompiled anyway for those of us who like to be on the cutting edge but hate taking those 5 minutes out to recompile the latest unstable kernel. Oh well... I can always use that time to get a cup of coffee or something. :)
  • Funny, it's located in Canada.

    Wrong pole, guys :-)*

    Actually, the FTP login message explains most, if not all.

    (hey, the +1 bonus is back! I guess I'll celebrate by hitting the "No Score +1 Bonus" button below)
  • One day, Linux mirrors should implement CVSup servers. Good compression, and you don't download everything, only the changes. Most people would be doing so after an initial version.

    Only people who want to download the entire thing will (by deleting the kernel source dir).

    It's also great for going back in history to older kernels and downloading those too.

    Yes, I know most people will think *BSD. But Linux can borrow this as well. Heck, they borrowed the entire concept of Unix by cloning it. heh
  • I've been doing this for a while. I've got a 386 (25 MHz, 6 MB RAM) and a 486 (33 MHz, 16 MB RAM), both running 2.2.5. I compiled their kernels on my PII 450 in an NFS-exported directory, doing the make bzImage and make modules on the PII, then did the make modules_install and copied over the zImage (or bzImage, I forget which I used) on the 386/486.

    They've been up for months since then, with no problems.

    Out of curiosity, does anyone know how long it would take to compile a 2.2.5 kernel, and about a dozen modules, on a 25 MHz 386 with 6 MB of RAM?

    --Ryan Cleary
  • That's because rsync is trying to do the compression itself, which in this case is redundant. You should have a "dont compress =" directive in rsyncd.conf for the appropriate files to avoid this problem.

    In other words, it's not a problem for a properly configured rsync site.
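For reference, here is a hypothetical rsyncd.conf excerpt with such a directive (module name and path are invented; "dont compress" takes a space-separated list of filename patterns):

```ini
# /etc/rsyncd.conf -- skip recompressing files that are already compressed
[kernel]
    path = /pub/linux/kernel
    comment = Linux kernel archive
    dont compress = *.gz *.tgz *.bz2 *.zip
```

With this in place, rsync sends the listed file types without running its own compression pass over them.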
