Linux Software

Interviews at Linux Conference Australia

Netsnipe writes "In a few days' time, DebianPlanet will be covering Linux Conference Australia (LCA), hosted by Linux Australia at the University of New South Wales in Sydney from January 17-20. This year's LCA falls coincidentally close to the release of the 2.4 Linux kernel two weeks ago, and it is the first major gathering of important Linux developers of the year. In the spirit of the Debian project, we at DebianPlanet want to make our interviewing process as open as the Debian distribution is with its own reporting and processes. To that end we are inviting everyone to submit their own questions to our interviewees and share a major opportunity to learn where the Linux community is heading. Our question submission system is now open to all at our website."
This discussion has been archived. No new comments can be posted.

  • I am writing a diary of my time at linux.conf.au.

    There are pictures and notes on the sessions I go to.

    http://www.rodos.net/lca2001/

  • Why isn't linux as good as xBSD?
    Did a traumatic childhood experience give you a shaving phobia?
    When was the last time you bathed?
    Why are you so damned fat - don't you exercise?
    Did you know that you have bits of food caught in your beard?
    Do you even remember what fresh air feels like?
    --Shoeboy
  • Umm, I really don't think this is right. Linux uses a scheduler that is not round-robin; it's fairly complicated and uses something like a cost-accounting method (vaguely - it's been a while since I cracked my OS book's chapter on Linux). Round-robin is a class you can assign processes to, but calling Linux round-robin is just wrong. O'Reilly has a page on this at their site [oreilly.com].
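
    For reference, assigning the POSIX round-robin class that comment refers to looks roughly like this - a minimal C sketch, assuming a glibc system and root privileges; the priority value is arbitrary:

        /* Put the calling process into the SCHED_RR (round-robin) class.
         * Ordinary processes run under the default SCHED_OTHER policy. */
        #include <sched.h>
        #include <stdio.h>

        int main(void)
        {
            struct sched_param param = { .sched_priority = 10 };

            if (sched_setscheduler(0, SCHED_RR, &param) == -1) {
                perror("sched_setscheduler");   /* needs root privileges */
                return 1;
            }
            printf("now running under SCHED_RR, priority %d\n",
                   param.sched_priority);
            return 0;
        }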


  • Have you tried?

    I installed Debian 2.2 over the network the other day. Two boot disks for the kernel and modules (plain old 1.44 MB floppies). Boot those, run the cable modem authentication program, download the base system, and off we go.

    The whole thing downloaded using FTP and HTTP. I had some trouble with what appeared to be a bad mirror, but that's about it.

    But I wish I had the ISOs, because boot floppies take a long time to boot, and about 1 in 3 floppies goes bad.

  • I only have 2 questions for these debian peoples.

    1) Why can't one do a network install of "debian"? (Most distros allow for Internet installations.) I'm not an ISO collector. :P

    2) Why do I have to download ISOs just to install the OS? Who the hell needs every package under the sun when just the base system (a la OpenBSD) will do?

    Linux has the hardware support that OpenBSD doesn't; otherwise I wouldn't be asking.
  • boy someone can't take a joke
  • Maybe this will be 2.4's survival test along with the 'hit' tv show "Survivor" also taking place down under. I'll be listening.
  • This is silly. Round-robin scheduling implies pre-emption.

    The way round-robin scheduling works is by setting an interrupt to occur after a certain time quantum to force the user process to relinquish control of the CPU back to the OS, which dispatches a different process.

    I think Linux uses a much more complex version of this involving priority queues and such (I'm not an expert myself), but in a simple RR system the interrupt and forced relinquishing of the CPU (without the process's cooperation) is also called "pre-emption". Or at least that's what I was taught in OS class.

    How could you do round-robin without pre-emption?

    Correct me if I'm wrong.
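
    A toy user-space C sketch of plain round-robin with a quantum, just to illustrate the mechanism described above (the task names and tick counts are made up; a real kernel does this with a timer interrupt, not a loop):

        #include <stdio.h>

        #define NTASKS  3
        #define QUANTUM 2                       /* ticks per timeslice */

        struct task { const char *name; int work_left; };

        int main(void)
        {
            struct task tasks[NTASKS] = {
                { "editor", 3 }, { "compiler", 6 }, { "httpd", 2 }
            };
            int current = 0, ticks_used = 0;

            for (int tick = 0; tick < 12; tick++) {
                struct task *t = &tasks[current];
                if (t->work_left > 0) {
                    t->work_left--;
                    printf("tick %2d: %s runs\n", tick, t->name);
                }
                /* the "timer interrupt": force a switch when the quantum
                 * expires, whether the task wants to yield or not */
                if (++ticks_used == QUANTUM || t->work_left == 0) {
                    current = (current + 1) % NTASKS;
                    ticks_used = 0;
                }
            }
            return 0;
        }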
  • you gotta love the debian guys (including those at debianplanet). it's too bad their apt-get got stolen (just a joke, i know it wasn't stolen).
  • Out of curiosity, what criteria determine when a process should get to pre-empt the scheduler? From your description it's not very clear.

    A busy departmental server may have http, smb, ftp, sql and other services all running at relatively high levels of traffic and CPU -- why should http beat out smb? Is there a way to renice a task to allow it to pre-empt, or is it based on some black magic?
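
    For what it's worth, renice only adjusts a process's nice value, which biases which runnable process the scheduler picks and how large a timeslice it gets. A small C sketch of what renice(8) does under the hood (setpriority), with a made-up PID:

        #include <sys/types.h>
        #include <sys/resource.h>
        #include <stdio.h>

        int main(void)
        {
            pid_t httpd_pid = 1234;    /* hypothetical PID of the http daemon */

            /* Negative nice values (higher priority) require root. */
            if (setpriority(PRIO_PROCESS, httpd_pid, -5) == -1) {
                perror("setpriority");
                return 1;
            }
            printf("pid %d reniced to -5\n", (int)httpd_pid);
            return 0;
        }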
  • It's already written. Read the latest "Kernel Traffic". It's not in the main tree yet, but there's already a patch that makes 2.4.0 fully pre-emptive. The developer responsible said, I believe, that he would like to see it merged completely by 2.5.0. Woohoo.

    Justin Dubs
  • and share a major opportunity to learn where the Linux community is heading.

    I assume that would be a 2.5 release ::Grin::

  • Basically anything that interrupts via hardware - system calls, file I/O, device-ready signals, etc.

    Anything that results in a move to kernel-space execution and then back to user-space execution is an opportunity for a reschedule in a pre-emptive system. In the example, data coming in from the Ethernet device would cause an interrupt, the Ethernet module would handle the incoming data, and then a reschedule would occur.

    (And yes, Linux is round-robin - it chooses the order and length of timeslices based on some fancy stuff, but once you're through that, it's just round-robin. To those from WPI, I have only two words: "Fossil Lab [wpi.edu].")
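
    A very rough C sketch of that pattern (not actual kernel code, just the shape of it): the interrupt handler flags that a reschedule is wanted, and the scheduler runs on the way back from kernel space to user space.

        #include <stdio.h>

        static int need_resched;        /* set by interrupt handlers */

        static void schedule(void)
        {
            printf("picking the next task to run\n");
        }

        static void ethernet_interrupt(void)
        {
            /* ... pull the packet off the device ... */
            need_resched = 1;           /* a sleeping server may now be runnable */
        }

        static void return_to_user_space(void)
        {
            if (need_resched) {         /* checked on every kernel->user exit */
                need_resched = 0;
                schedule();
            }
        }

        int main(void)
        {
            ethernet_interrupt();       /* pretend the NIC raised an interrupt */
            return_to_user_space();     /* the reschedule happens on the way out */
            return 0;
        }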

  • What distro do you use? Does anyone know what distro these fine bearded gentlemen use? Slightly OT: last summer, when Linux Journal published a big picture of MadDog, my son thought he was Santa. It was pretty cool.
  • I think the goals of the community have changed over time, which is what you would expect from a dynamic, living social organism.

    A while ago, the emphasis was just on creating a good, stable OS that techies would appreciate and love. Since most techies don't make their mind up about what OS to use on the basis of what they see on the nightly news or the front page of the major daily newspapers, the effect of the media was therefore largely irrelevant.

    Nowadays, the objective has shifted to World Domination and the defeat of Microsoft. We're actively trying to recruit mainstream users, and to do that you need to concentrate on usability and public perception.

    I don't think we'll see a technical degradation in the quality of Linux, however, because the community is quite large and diverse and people will always specialize in whatever most interests them. You'll have the die-hard techies working on keeping the kernel and other system components up to snuff, and others (eg Miguel) doing the UI stuff.

    But then again, I could be wrong. Who knows? Trying to predict the course of future events in computing and IT almost never works.

  • by _xeno_ ( 155264 ) on Tuesday January 16, 2001 @02:10PM (#503614)
    I'd really, really like to know when Linus and the other kernel developers are going to buckle down and rewrite Linux so that it uses a pre-emptive scheduler. Round-robin really doesn't cut it for servers or desktops. I'd love to see Linux 3.0 be built around just this: altering the kernel so that it uses a pre-emptive scheduler.

    To answer the almost inevitable "do it yourself" comments: yes, I have the source - but playing with the scheduler in this way is a major overhaul. It will probably break just about every other portion of the kernel, since the kernel is designed around a round-robin scheduler. The scheduler is really the heart of any kernel anyway - changing it would require the Linux developers to sit down and redesign everything so that every portion of the kernel is built around the new scheduler. Synchronization issues become the main problem. To do this "right", Linux development on new technology basically has to stop until the new scheduler is fully designed and implemented.

    To turn to a trollish tack, Microsoft has had a pre-emptive scheduler ever since NT was released. OS/2 beat them to it, and VMS beat OS/2. Almost all modern operating systems, other than Linux (I don't know about the BSDs), use a pre-emptive scheduler. A pre-emptive scheduler is ideal for both server and desktop environments. It's really sad that Linux still uses a round-robin scheduler.

    The main reason to use a pre-emptive scheduler is that everything actually moves faster. To use a web server as an example, say the machine has an HTTP daemon running and some other CPU-intensive process running. For the time being, nobody is using the web server - it's sleeping, waiting for a connection. The CPU-intensive process is given a 200ms timeslice to crunch numbers; halfway through that, a connection comes in to the web server. On a pre-emptive scheduler, the web server is then scheduled in preference to the CPU-intensive process. On Linux, the CPU-intensive process keeps its timeslice, and only after it's through can the web server handle the request. What's wrong with that? It takes longer for the web server to handle requests. Some ASCII art to demonstrate:

    [other process]<Data comes in>[other process][http served] - Linux's scheduler
    [other process]<Data comes in>[http served][other process] - Pre-emptive scheduler

    What's the difference? The HTTP request gets served more quickly with the pre-emptive scheduler, while the other process really loses no time (assuming it would take another timeslice after the HTTP request is served to complete its task). This time can really add up - especially with more than two processes.
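
    The arithmetic behind those two timelines, using the comment's own hypothetical numbers (a 200ms timeslice and a request arriving halfway through it), as a tiny C program:

        #include <stdio.h>

        int main(void)
        {
            const int slice_ms   = 200;   /* timeslice of the CPU-bound process */
            const int arrival_ms = 100;   /* request arrives halfway through    */

            /* Run to the end of the slice: the request waits out the rest. */
            int wait_for_slice = slice_ms - arrival_ms;

            /* Pre-empt immediately: the web server is dispatched right away. */
            int wait_with_preempt = 0;

            printf("wait until slice ends : %3d ms\n", wait_for_slice);
            printf("pre-empt immediately  : %3d ms\n", wait_with_preempt);
            printf("extra latency per hit : %3d ms\n",
                   wait_for_slice - wait_with_preempt);
            return 0;
        }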

  • I don't think it is so much that the media is driving anything in the Linux community. I think we have some very smart people (MadDog comes to mind) using the media to tell their story. Many of us would like to make a living working with Linux, and the only way to grow the market share so this can come true is to use the media - because while we may not care what anyone thinks, our PHBs do. If anything, I think it is making things a bit better. Look at the recent kernel release and how Linus is trying to ensure that it does not break right after being released. All in all, I think these gentlemen are responding to the media in a very healthy way and using it to achieve our goals. IMO a very good thing.
  • ...for the Linux Community? Hi. I was wondering, how important is the press to Linux and the dissemination of information? I (speaking as a newbie ;-) have noticed that everybody seems to be getting really paranoid about what the press thinks of them, and about the media in general! But only a few years ago it didn't really matter, because everyone who used Linux was involved in the development side and was very knowledgeable, right? Now the Linux community is becoming more like those of other OSes as it gets larger, and it is regressing to the mean (a term I learned in higher mathematics, yuk!).

    So it would seem that the media is becoming more important to the Linux community, and so events like this are starting to appear. Is it healthy for Linux development to be driven by the press and users? I thought that the success of Linux was due to the fact that it was developed for programmers by programmers, without reference to the media. I'm very interested in this. Thanks! ;)
