The 2.7 Kernel: Back To The Future For Linux

Anonymous Coward writes "Now that the Linux 2.6 kernel has been released and is being worked into distributions, many in the open-source community are turning their attention to the next development and test kernel, known as the 2.7 tree. To get an early glimpse at some of the thinking going into the next kernel, key vendors that aid in shaping the Linux kernel helped eWEEK last week put together a long-range wish list for 2.7."
  • "The 2.6 kernel is a server release, so we can expect to see a greater desktop focus, which will be beneficial to us, as more users will be able to use Linux to run their clients really well."

    • by Anonymous Coward on Tuesday January 27, 2004 @07:19PM (#8106682)
      That was quoted from an Oracle exec. Let's not get ahead of ourselves. I'm not sure how 2.6 is considered a "server" release. The kernel is the kernel. 2.6 will be the default kernel on desktop installs in a few months, I'm sure.

      A lot of the patches in 2.6 benefit both the server and desktop camps equally: the scheduler and VM improvements, and XFS. I believe RedHat backports those patches to the 2.4 kernel for the ES/AS/WS versions.

      If you haven't tried 2.6 yet, you really should. I noticed a considerable improvement in X responsiveness with it.
    • They need to standardize on a graphics API. I would say OpenGL, and not X.

      OpenGL is supported by the hardware guys, and X Windows could actually be written on top of it, so that you don't need a new X server for each hardware revision. A hardware-accelerated, anti-aliased, alpha-blended window manager running at 100 fps: it's very doable. And it could be used by non-windowing devices (like game boxes) that don't need X Windows.

      This helps installation ease of use: Define the interface spec, and let the h
  • The article was OK and all, but where is the list of long-awaited features???
  • So... (Score:5, Funny)

    by }InFuZeD{ ( 52430 ) on Tuesday January 27, 2004 @07:09PM (#8106530) Homepage
    I go to read about the 2.7 Linux Kernel and I get an advertisement telling me that Linux costs 11%-22% more on average in 4 out of 5 workload scenarios... I immediately lost interest in the 2.7 kernel and just got angry at Microsoft.

    So that is their plan... the whole Yoda "hate blinds" plot... darn they're good.
    • Re:So... (Score:4, Interesting)

      by hcg50a ( 690062 ) on Tuesday January 27, 2004 @07:14PM (#8106608) Journal
      Don't get mad at Microsoft; get mad at eWeek for placing the silly ad where they placed it.

      I thought it was hilarious for the ad to be completely surrounded by the article about the Linux Kernel release.

      Almost makes you wish SCO was in the news business....
      • No, I'm sure there was some specification by Microsoft on what types of stories they wanted their ads placed next to. If they didn't request something on this story in particular, I would bet they at least asked for Linux spots in general. Along, of course, with lots of other topics of interest. I mean, why shouldn't they?

        I have certainly noticed an increase in their placement on Linux stories in the last year or so...

        But that's fair...I mean it just makes sense. Of course, the Windows/Linux TCO testing i
      • Re:So... (Score:3, Funny)

        by kfg ( 145172 )
        Don't get mad at Microsoft; get mad at eWeek for placing the silly ad where they placed it.

        Yeah, I can do that, but then that's their business, so they're not likely to give it up. Microsoft paid them to put the ad there.

        This sort of placement is so common these days I barely even notice it. It's the ironic pairings that catch my attention -- like when a broadcast of Brave New World was sponsored by Zoloft, with their little bouncing sad face/happy face cartoon.

        "Do you feel depressed?
    • Re:So... (Score:3, Interesting)

      by WhiteDeath ( 737946 )
      I actually read the ad....
      I wonder - did they use people who had no experience with Windows to compare against the support costs for people who had no experience with Linux?

      Given that a Windows server can cost several thousand dollars in software alone, before you pay someone to actually install and configure it, are they saying it cost them several thousand dollars to get the Linux server working?

      Takes me less than a day to get a working, configured Linux server... (two if I downlo

  • Let's hope (Score:5, Funny)

    by iminplaya ( 723125 ) on Tuesday January 27, 2004 @07:10PM (#8106543) Journal
    that they remove all the SCO code this time. Maybe then it will fit on a floppy again.
  • move along (Score:5, Insightful)

    by Anonymous Coward on Tuesday January 27, 2004 @07:10PM (#8106550)
    There is nothing specific about anything. What a useless article. You could say you want a milkshake with your 2.7 kernel and it would be just as valid as the things mentioned.
    • Yeah...yeah, that sounds good! But make mine a malt, ok? A Chocolate malt.

      Oooh...and some onion rings.

      Right. 2.7 kernel, Chocolate Malt, Onion Rings.
  • by BizDiz ( 723499 ) on Tuesday January 27, 2004 @07:11PM (#8106568)
    Is just great driver compatibility. That seems like the primary hurdle that can really keep people out, as well as a large area that is easily neglected in a more server-oriented mindset (especially in terms of user peripherals).
    • It's not neglected; it's just that there isn't the manpower to reverse engineer every piece of hardware in existence to make a driver for it. Pretty much all the mainstream stuff is supported (some with a vendor driver, like the nVidia driver), and if you need/want Linux support for something that doesn't have a driver, talk to the manufacturer. Many companies like Intel, Dell, etc. are writing and maintaining GPL'd drivers in the kernel these days.
    • by Cthefuture ( 665326 ) on Tuesday January 27, 2004 @08:16PM (#8107305)
      I want to second this opinion. One of my major problem areas with Linux has been the drivers, or lack thereof.

      I know the die-hards will nay-say this, but being able to use native Windows drivers would be absolutely great. Now, maybe you don't use MPlayer (and the other apps that use "native" Windows drivers), but there are a hell of a lot of us who do and love it. The same thing should be done for all drivers. Video, USB, FireWire, PCI, whatever... Make it so we can use Windows drivers in Linux, because there are way too many half-assed reverse-engineered Linux drivers that just don't work right. I mean, when in the hell will my Wacom Intuos2 tablet finally work correctly?! (I know this is not just a kernel problem but an XFree one too.) Yes, yes, I know about those patches here and there, but try to get them to work with XFree 4.3 and kernel 2.6... Ain't gonna happen. Just let me use the Windows drivers, please.

      I don't give a crap about some utopian vision of Linux greatness in which all manufacturers support Linux. It isn't happening any time soon and I have real work to do.

      With that said, my #1 greatest wish for 2.7/8 would be to get the damn SBP2 FireWire drivers working correctly. Dammit, that thing has been broken since it was introduced. Nearly every time I boot my system I have to plug and unplug the FireWire cable (sometimes several times) to get the devices reset and loaded properly so I can access them (I'm using kernel 2.6, but it has always been broken like this). The read/write/timeout errors have gotten better, but they still occur with large drives. I'm absolutely terrified that one day I'll have to fsck my 90 GB partition on my FireWire drive again. The last couple of times I had to do that, it toasted the partition every time (I/O errors and timeouts).
    • This problem is disappearing nowadays because most devices you'll find in a desktop use standardized interfaces. OHCI/UHCI for USB 1.1 and EHCI for USB2 controllers, USB mass storage, Firewire DV devices, Firewire storage devices, PTP mode cameras...most recent hardware is really easy to support, except for sound cards and graphics cards. The sound card manufacturers seem to make specs available because most cards have support in ALSA, and the graphics card manufacturers have all turned evil and release pro

    • Back when I "upgraded" to XP, I found my scanner had NO drivers (and still doesn't), and my NVidia TNT2 (ASUS V3800) with video in/out had drivers, but the video in/out didn't work.

      I moved my scanner to my linux server and installed "sane". I installed "sane-twain" (free/OSS software) on my XP box, and it then accessed the scanner on the linux box quite happily. Some of the icons weren't as pretty as the windows driver, but all the same stuff was there.

      Later I installed a dual-boot setup on my workstati

  • by Togakure ( 744222 ) on Tuesday January 27, 2004 @07:11PM (#8106571) Journal
    Something that will autoconfigure the desktop (using voice commands of course, not this obsolete keyboard thing) while serving me a pint of Guinness at the same time...
    • Something that will autoconfigure the desktop (using voice commands of course, not this obsolete keyboard thing) while serving me a pint of Guinness at the same time...

      Actually, the current kernels do this. Here is how:
      1) go on IRC on a linux channel, and say something like "man linux really sucks - on windows, I can just double click on a cd icon and it will install the drivers, but when i try that in linux, it never works"
      2) this will offend some guru's view that linux is perfect, so he will try an
  • Dear Linus, (Score:5, Funny)

    by Anonymous Coward on Tuesday January 27, 2004 @07:11PM (#8106573)
    I have always felt that Linux is a nice operating system (for hobbyists and
    geeks), but there are some areas where it is seriously lacking, especially when
    compared to its main competitor, Microsoft® Windows®.

    * File sharing. Windows has long been superior when it comes to making large
    amounts of files available to third parties. Even early versions of Windows
    automatically detected and made available all directories thanks to the built in
    NetBIOS-powered file sharing support. But Microsoft has realized that this
    technology is inherently limited and has added even better file sharing support
    to its Windows XP operating system. "Universal Plug and Play" [slashdot.org] will
    make it possible to literally access any file, from any device! I think
    universal file sharing support needs to be built into the Linux kernel soon.

    * Intelligent agents. With innovations like Clippy, the talking paperclip
    [dmu.ac.uk] and Microsoft Bob, Microsoft has always tried to make life easier
    for its customers. With Outlook and Outlook Express, Microsoft has built a
    framework for developers to create even smarter agents. Especially popular
    agents include "Sircam", which automatically asks the users' friends for advice
    on files he is working on and the "Hybris" agent, which is a self-replicating
    copy of a humorous take on "Snow-White and the Seven Dwarves" (the real story!).
    Microsoft is working on expanding this P2P technology to its web servers. This
    project is still in the beta stage, thus the name "Code Red". The next versions
    will be called "Code Yellow" and "Code Green".

    * Version numbers. Linux has real naming problems. What's the difference
    between a 2.4.19 and a 2.2.17 kernel anyway? And what's with those odd and even
    numbers? Microsoft has always had clear and sophisticated naming/versioning
    policies. For example, Windows 95 was named Windows 95 because it was released
    in 1995. Windows 98 was released three years later, and so on. Windows XP
    brought a whole new "experience" to the user, therefore the name. I suggest that
    the next Linux kernel releases be called Linux 03, Linux 04, Linux 04.5 (OSR1),
    Linux 04.7B (OSR2 SP4 OEM), Linux 2005 and Linux VD (Valentine's Day edition).
    Furthermore, remember how Microsoft named every upcoming version of Windows
    after some Egyptian city? Cairo, Chicago and so on. I think that the development
    kernels should be named after Spanish cities to celebrate Linux' Spanish
    origins. Linux Milano or Linux Rome anyone?

    * Multi-User Support. This has always been one of Microsoft's strong sides,
    especially in the Windows 95/98 variants, where passwords were completely
    unnecessary. Microsoft has made the right decision by not bothering the user
    with a distinction between "normal" and "root" users too much -- practice has
    shown that average users can be trusted to act responsibly and in full awareness
    of the potential consequences of their actions. After all, if your operating
    system doesn't trust you, why should you trust it? (To be fair, Linux is making
    some progress here with the Lindows [lindows.com] distribution, where users are
    always running as root.)

    With Windows XP, Microsoft has again improved multi-user support. Not only
    does Windows XP come with a large library of user pictures that are displayed on
    the login screen, such as a guitar and a flower, it also has "quick user
    change". This makes it possible to login as a different user with a simple
    keyboard shortcut, and the good news is: programs from the old user keep running
    in the background! Beat that, Linux!

    * Programmability. Microsoft has always been known for making computer
    machine power accessible to end users. The operating system comes with many
    helpful tools such as VBScript, a programming language especially useful for
    developing intelligent agents as mentioned above, and QBASIC, a truly innovative
    "hacker" tool that makes it pos
    • Re:Dear Linus, (Score:2, Insightful)

      by AnyoneEB ( 574727 )
      For example, Windows 95 was named Windows 95 because it was released in 1995.
      What country do you live in? Windows 95 came out in 1996! (hence the name ~_^)
    • by offpath3 ( 604739 ) <offpath4@nOspAM.yahoo.co.jp> on Tuesday January 27, 2004 @07:47PM (#8107000)
      Linux VD

      I'd heard the GPL was viral, but this is taking it a little too far! =)

    • in the years i've been reading /. - i think this is the only time i've EVER clicked "Read the rest of this comment...".
      Well done, you funny bastard!
  • MPPE? (Score:4, Interesting)

    by Malc ( 1751 ) on Tuesday January 27, 2004 @07:11PM (#8106576)
    Is there any reason why after all these years we don't have MPPE in a stock kernel? I always have to get a specially built kernel so that I can use pppd to connect to a MSFT/Windows VPN server. I use somebody else's build (deb http://www.vanadac.com/~dajhorn/projects/debian-pptp woody main) which makes my life much easier, but it's not released as fast as the stock kernels.
  • by lawpoop ( 604919 ) on Tuesday January 27, 2004 @07:13PM (#8106600) Homepage Journal
    Does it strike anyone else as strange that everyone keeps dreaming up more stuff to throw into the kernel? What happened to the unix philosophy of small, independent programs that do one thing well?

    I'm aware of projects such as The Hurd [gnu.org] -- this seems to follow closely the unix philosophy, but it's a ways off from general usability. Others have noted that it's usually easier to debug a monolithic program than to debug communication problems between small unixy programs. (Maybe there is some way to make a communications chart of said small programs, so that it looks like monolithic code? )

    Discuss.

    • by Mr. Underbridge ( 666784 ) on Tuesday January 27, 2004 @07:18PM (#8106666)
      Does it strike anyone else as strange that everyone keeps dreaming up more stuff to throw into the kernel? What happened to the unix philosophy of small, independent programs that do one thing well?

      That's still the idea. When they say "putting new stuff in the kernel," they really mean "new options that you *can* compile into the kernel." Don't like Ham radio support in your kernel? Don't compile it in. Same for multiprocessor support, or virtualization support, or whatever the hell they throw in that you happen not to want.

      That's the beauty. Now - you *are* compiling your own kernels, right? Cuz if you blindly use whatever default kernel RedHat or whoever throws at you, that's not so good maybe. ;)

    • Clustering support has to go into the kernel to create clusters which appear as a single system, at least if you have a monolithic kernel to begin with. This isn't true of microkernel-based operating systems. Since Linux isn't one of those, anything that's running in user mode is not going to be as fast as stuff that's in the kernel. Virtualization can be done without kernel modifications, but with them, it will be much faster.

      The Unix philosophy has never been to keep stuff out of the kernel when it make

  • by Eberlin ( 570874 ) on Tuesday January 27, 2004 @07:14PM (#8106605) Homepage
    I saw something about clustering support. Not much of a list. There's gotta be more than that. "Focusing on the desktop" does not make a list...it's too vague. Any specifics?

    Then again, I suppose you're not going to get very specific on an e-week article.

    Don't get me wrong. I'm all excited about 2.6 making the distros and then hearing about what awesome stuff they'll have on 2.7 -- but this article really just leaves me hanging.
  • by ducman ( 107063 ) <slashdotNO@SPAMreality-based.com> on Tuesday January 27, 2004 @07:15PM (#8106614)
    After a frustrating weekend trying to get a HighPoint SATA card working in my Linux server, I'm putting better SATA support at the top of my wish list!
    • by ender81b ( 520454 ) <wdinger@g m a i l . com> on Tuesday January 27, 2004 @07:53PM (#8107068) Homepage Journal
      Which brings up a good point for the 2.7 kernel. You might have better SATA support if they would actually freeze a kernel driver API.

      How about we stop politicizing the kernel and actually make a stable driver API? One that doesn't change with every point release of the kernel?

      I know that people want open source drivers, but it's extremely hypocritical to complain about companies' lack of support for Linux and then do absolutely *nothing* to help them out, changing the API with every point release. Listen, besides some fanatics, nobody cares about open source drivers. People would rather their stuff just work.

      I understand that, fundamentally, open source drivers are technically a better solution, but there is no chance in hell of convincing Nvidia or any other company that has substantial IP and research in their drivers to publish them as open source. Same thing with Intel's Centrino drivers.

      Make a stable api darnit! :)
  • One has to wonder (Score:5, Interesting)

    by krammit ( 540755 ) on Tuesday January 27, 2004 @07:17PM (#8106652) Homepage
    With so many people with their own agendas pushing and pulling at the kernel, and Linus being the steadfast leader he is, I can't help but think Linux may be headed for a fork in the not-so-distant future -- unless there is a way to make the kernel truly enterprise-class as well as a responsive, low-latency desktop system and a near-real-time embedded platform, all at the same time.

    I'm amazed (in the good way) the kernel devs have made it as versatile as they have to this point. Hats off to them and here's to hoping they can keep it up.
    • actually the kernel is so configurable, that may well happen.
    • by adrianbaugh ( 696007 ) on Tuesday January 27, 2004 @07:34PM (#8106874) Homepage Journal
      I don't see why the two are necessarily contradictory. After all, the bits to support enterprise class hardware can easily be omitted from compiling an embedded or desktop platform: if they can make a kernel with modular scheduler and tunable latency (which was the way it seemed to be heading with Con Kolivas' patch set) then the enterprise boys can increase the latency for minimum kernel CPU usage, the desktop people can knock it down for good responsiveness and the embedded folks can plug in an alternative scheduler to suit their own particular needs.
  • by bersl2 ( 689221 ) on Tuesday January 27, 2004 @07:17PM (#8106660) Journal
    The ad du jour: Windows saved 11-22% over Linux in TCO in 4 out of 5 environments.

    From the story: Amazon, which has been running Linux since 2000, has been steadily moving its infrastructure from Sun Microsystems Inc.'s Unix servers to Hewlett-Packard Co. ProLiant servers running Linux. The company said in a 2001 Securities and Exchange Commission filing that Linux cut its technology expenses by $16 million, or 25 percent.

    I know the Amazon example is in comparison to Solaris; but still... I felt like stoking the fire.
    • Actually, the article is wrong. Amazon is moving to Linux from systems running HP-UX. This has been a corporate policy since 2001, when Amazon managed to switch, in 90 days, over two-thirds of its production servers from systems running HP-UX, Tru64 UNIX and Solaris to Linux on HP NetServers. It was a completely insane yet really fun project to work on.
  • Clustering Finally (Score:3, Interesting)

    by silas_moeckel ( 234313 ) <silas AT dsminc-corp DOT com> on Tuesday January 27, 2004 @07:19PM (#8106688) Homepage
    Not much information in the article, but I must admit it would be nice to start having SAN/cluster filesystems as part of stock kernels. People really don't understand the power of these filesystems to provide security and scalability. With modern cluster interconnects being able to serve up Fibre Channel, multi-gigabit Ethernet and low-latency links, it gets easier and easier to make pure diskless compute nodes that are good for more than just number crunching.

    Think about only needing a single copy of your web server image, mounted read-only by the web servers themselves.

    Setting up CAD farms that all utilize direct-attached storage in a shared fashion, leaving network bottlenecks behind.

    Low-end systems like FireWire may even be able to attach single disks to multiple machines with simultaneous access in a safe manner (have to check; multi-initiator FireWire looks possible, but I've never seen a definite answer).
  • by yamcha666 ( 519244 ) on Tuesday January 27, 2004 @07:22PM (#8106719)
    Will there be support for my orbiting brain lasers [geekradio.com] in the 2.7 series?
  • Who cares for 2.7 (Score:3, Insightful)

    by Corfitz ( 669547 ) on Tuesday January 27, 2004 @07:22PM (#8106725)
    ... I'm pretty sure HURD will take over any day now (and make that GNU/HURD to satisfy everyone). Joking aside, I for one hope that some kind of simple clustering will be implemented in the new kernel (possibly even with some kind of load balancing). It's doable with the current kernel series, but I'm drooling over all the simulations I would be able to do in parallel at the university if all the computers joined the cluster by default.

    --
    No bits were harmed during the production of this mail

  • I know (Score:4, Funny)

    by Sexy Commando ( 612371 ) on Tuesday January 27, 2004 @07:23PM (#8106737) Journal
    A web browser and a media player would make 2.7 a killer kernel.
  • by winkydink ( 650484 ) * <sv.dude@gmail.com> on Tuesday January 27, 2004 @07:24PM (#8106746) Homepage Journal
    I've read and re-read the article. Other than a couple of vague references, there is no list there at all.
    • Man, re-RTFA, there's tonnes of juicy stuff in there, such as the insightful and thought provoking:-

      In fact, Dargo contends that a 2.7 wish list from each of the vendors would reflect their particular technology interests and that there will be different wishes from the different groups within those companies.

      Or this juicy tidbit:

      "Some basic clustering support would be nice."

      And some groundbreaking, earth-shattering revelations, such as:

      "For some, additional desktop functionality would be welcome
  • by ChiralSoftware ( 743411 ) <info@chiralsoftware.net> on Tuesday January 27, 2004 @07:27PM (#8106787) Homepage
    I would like to see fewer things in the 2.7 kernel than in the 2.6 kernel. Getting device drivers, network drivers, etc., out of the kernel core and into modules was a step forward, but I think the next step forward would be to get these things out of the kernel entirely, and into userland. That would give Linux a huge advantage over Microsoft Windows. Installing and uninstalling device drivers would become much easier for users. Manufacturers would like this too, because then there would be less concern about the GPL and device drivers. It would be easier to release binary-only drivers.
    • On systems with a lot of modules loaded, having them in userland would be slower than it is right now. You'd have to have another layer in between them, clogging things up.

      I do think removing modules from the kernel package would be good - so you could download a barebones kernel (with the ext filesystem only, for example) which would build and run on a VERY basic system - but things like the reiser, vfat, etc. modules should be put into a separate package for downloading (e.g. kernel-2.6.1-filesystems.tar.gz).
      you could t
      • The point of having lots of stuff in the kernel is so all you have to do is 'y' what you want, 'm' what you want as a module, and 'n' what you don't want. Then compile. You choose what to compile; what you didn't choose does not end up in your kernel.

        It would also be very annoying to build a whole source tree from 20 different parts, plus patches. Some of us LIKE having the whole source tree in one tar.
      • those of us who need filesystems other than ext2/3 just grab the other source tarball and extract it into /usr/src/modules/'subsystem'.

        That's a testing nightmare, given how fast the kernel evolves. If there's a kernel config option in the standard kernel, the odds are that configuration of the module code will be used and tested with that version of the kernel. If there are any mismatches, incompatibilities, or other bugs, there'll be a chance of someone finding and reporting it, since there's a known

    • Why encourage binary-only drivers? I like it the way it is; the hardware makers really have to go to great lengths to make binary-only drivers. We should really strive for a completely free operating system, not just free apps and binary drivers. I feel that using binary-only drivers is a bit like cheating on the goal line. You know you've won, but it just doesn't feel right because you cheated.
      • The reason is, not everyone adheres to the same philosophy about 'open' software (i.e. some of us think it should be a choice, not a mandate). For purely economic reasons, some software will need to be proprietary. Like it or not, people gotta eat.

        For that matter, it's fairly ridiculous to have to recompile just to add a device. If it is an option for those who want to tweak the system, fine. But for someone like myself, who is more interested in using the OS than understanding all the minutiae, a binary op
    • How would that even be possible? Take a NIC driver, for instance: currently, data from the NIC is retrieved from a certain memory address by a driver responding to an IRQ being raised. For your userland daemon idea to have any chance of working, the kernel would still need a driver to initialize the hardware, respond to and acknowledge the IRQs, copy the data somewhere the userland daemon can handle it, and probably other things I'm not thinking of.

      At the very least the speed would
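
      (To make the above concrete, here is a minimal, hypothetical sketch of the in-kernel half being described, using the 2.6-era request_irq() interface. The mynic_* names are invented for illustration, and the actual DMA-ring handling is omitted -- this is a sketch of the idea, not a working driver.)

      #include <linux/interrupt.h>
      #include <linux/netdevice.h>
      #include <linux/skbuff.h>

      /* Interrupt handler: this part runs in the kernel no matter where the
       * rest of the driver lives.  It must acknowledge the card and pull the
       * frame off the hardware before anything in userland could see it. */
      static irqreturn_t mynic_isr(int irq, void *dev_id, struct pt_regs *regs)
      {
              struct net_device *dev = dev_id;
              struct sk_buff *skb;

              /* (read and ack the NIC's interrupt status register here; a real
               *  driver would also copy the frame out of the DMA ring into the
               *  skb and set skb->protocol via eth_type_trans()) */
              skb = dev_alloc_skb(1536);
              if (!skb)
                      return IRQ_HANDLED;
              skb->dev = dev;
              netif_rx(skb);   /* hand the frame up to the network stack */
              return IRQ_HANDLED;
      }

      static int mynic_open(struct net_device *dev)
      {
              /* SA_SHIRQ: the interrupt line may be shared with other devices */
              return request_irq(dev->irq, mynic_isr, SA_SHIRQ, dev->name, dev);
      }

      Whatever a userland daemon might add would sit on top of that; the IRQ path itself cannot leave the kernel.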
    • by Abcd1234 ( 188840 ) on Tuesday January 27, 2004 @08:00PM (#8107126) Homepage
      Installing and un-installing device drivers would become much easier for users.

      Is insmod so difficult?

      Manufacturers would like this too because then there would be less concern about GPL and device drivers. It would be easier to release binary-only drivers.

      Since when did we care? Linus has flat out said he doesn't like binary drivers, for pretty good reasons, I think (harder to debug being the main one). Why encourage this?

      So, any other good reasons why you'd want userland drivers? Are those reasons good enough to offset the additional overhead that this would incur (additional context switching, etc.)? The new layers of indirection that would have to be added?

      Frankly, I think you might have been bitten by the microkernel bug. But, sorry, Linux ain't no microkernel. And, so far, it hasn't needed to be. So, why start now?
      • Playing devil's advocate here. I'm sure you're already aware of most of these points.

        Is insmod so difficult?

        First, you'd really want modprobe. Second, for the few not using their distributions' modules, the point is that it is still more difficult than running an executable. Usually because the module in question needs to be compiled against your particular kernel, which is much less backward/forward compatible than glibc.

        So, any other good reasons why you'd want userland drivers?

        It should be more
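
        (As a concrete aside -- an illustrative sketch, not anything from the article -- here is roughly the smallest thing insmod/modprobe will load, a bare 2.6-style module. The binary is built against one particular kernel's headers, which is exactly the "compiled against your particular kernel" problem mentioned above.)

        #include <linux/init.h>
        #include <linux/kernel.h>
        #include <linux/module.h>

        MODULE_LICENSE("GPL");

        static int __init hello_init(void)
        {
                printk(KERN_INFO "hello: module loaded\n");
                return 0;   /* a nonzero return would make insmod/modprobe fail */
        }

        static void __exit hello_exit(void)
        {
                printk(KERN_INFO "hello: module unloaded\n");
        }

        module_init(hello_init);
        module_exit(hello_exit);

        Even something this trivial generally has to be rebuilt whenever the kernel changes, a constraint ordinary glibc-linked executables don't have.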

    • I think the next step forward would be to get these things out of the kernel entirely, and into userland. That would give Linux a huge advantage over Microsoft Windows. Installing and un-installing device drivers would become much easier for users
      Maybe you should try HURD. I don't think Linux is going to go in the direction that you want.
  • Pointless article (Score:5, Insightful)

    by Theovon ( 109752 ) on Tuesday January 27, 2004 @07:30PM (#8106823)
    That article was amazingly content-free.
  • Kernel auditing (Score:3, Informative)

    by jonabbey ( 2498 ) * <jonabbey@ganymeta.org> on Tuesday January 27, 2004 @07:30PM (#8106826) Homepage

    Interesting that CA is pushing for inclusion of a kernel auditing facility in 2.7. That sort of functionality, required in a number of federal contexts, is already available in a Linux-compatible, GPL'ed code base from Intersect Alliance [intersectalliance.com] down in Australia. The Snare [sourceforge.net] project patches the Linux kernel with auditing instrumentation, making it possible to detect abnormal system call activity that other methods miss.

    Solaris has had something like this for a long time in the form of BSM, as has Windows. Even Mac OS X has preliminary BSM support in Panther. It would be great to see this kind of functionality as a config option in the Linux kernel, and hopefully sooner rather than later.

  • My wish (Score:3, Interesting)

    by chrysrobyn ( 106763 ) on Tuesday January 27, 2004 @07:34PM (#8106872)
    I want filesystem priorities. A background task that is grinding the hard drive should only do so when a high-priority task isn't using the drive, or when its data is adjacent to the high-priority data the head is next to anyway.
  • Virtualization (Score:4, Insightful)

    by Goyuix ( 698012 ) on Tuesday January 27, 2004 @07:41PM (#8106938) Homepage
    They mentioned the word in passing, but I think having the kernel provide this would be a huge benefit on many levels - and immediate benefits could be seen in projects like udev and the HAL stuff that is going on.

    Besides, machines are getting to resemble the big iron of yesterday enough that you can (and a large number of people do) run multiple OS's on a single machine. Having an underlying architecture to better support those goals would be a great thing.

    To a certain degree, it is like the evolution from a shared memory space to a virtual memory space - one of the greatest features was protection. Virtualize the entire OS (wow!) and you can run your different server apps on the same machine without the risks of one nuking the other.

    Emulation has a ton of cool things going on right now. With a swift boost from an OS designed to virtualize the hardware, it would be trivial to have multiple copies of the OS running at very near full speed with complete access to the hardware.
    • round two... (Score:3, Informative)

      by Dave_bsr ( 520621 )
      Um. We have this already, right? You can run Linux virtually in Linux, to do just as you describe in paragraph 3. You can run any kind of emulator for other OSes to run on. What else would you want again?

      "..complete access to the hardware..."

      That's the point of virtualization, etc. Access to the hardware breaks the security part of virtualization and emulation. If you can access memory just like you were the original operating system, then you ARE the operating system, and you can trash anything and e
  • Today Windows is plagued with viruses, trojans and worms. If Linux usage becomes more widespread among users with little knowledge of computers, networks and security, we might see similar problems in Linux in the future. The fact that Linux has a much better architecture than Windows will probably not be enough to protect Linux from incompetent users.

    To prevent this, it would be nice if some kind of sandboxing technology were implemented. E.g. it could be based on digital signature technologies, where appli
  • ...hardware detection. The boffins at the top-secret Linux dev HQ could write their own lib (or fork kudzu, discover, etc.) which probes your hardware, tells you what it's found, and, if you accept the software's proposal, writes you a .config file.

    Also, a section at the start of menuconfig called "Basic Features" would be nice. In it would be things like:
    DVD Support: Y/N
    Clicking yes would then enable all the kernel options needed for watching/writing DVDs (UDF filesystem, MTRR, DMA, etc.).
    i
    • Yes, because that's what linux has been missing all along! The Windows Add Hardware Wizard!!!

      Jesus cockgobbling Christ, am I the only one that thinks this guy should get karma-bombed back to -50 for this cretinish piece of ass-stinky opinion?

      Menuconfig is about as simple and consistent as it gets, and unlike some other un-named operating system, linux doesn't have a "ports" category that only sometimes includes 3rd party serial cards or USB busses. Drivers have a certain category they belong in (barring s
  • How about (proper) support of multiple USB keyboards and mice? Combine that with multiple video cards, and you can easily share a PC among users without dealing with X terminals.
  • by Bruha ( 412869 ) on Tuesday January 27, 2004 @08:51PM (#8107762) Homepage Journal
    I would like to see something in the nature of an area where all executable commands for any user software get put. Many programs today install themselves into various places -- /usr, /usr/share, /usr/local -- it just goes on and on. Regardless of where a program installs itself, I think there should be a top-level directory /usr/software where every program puts a link back to its working directory and main executable.

    That way all users know that their programs reside in /usr/software, and it makes it easier for plugin/mod authors to know where things are.

    Either way, if this is not feasible, then it's time to standardize where things are going. Windows has its Program Files, which went a long way towards fixing user confusion :) -- people now know where their programs (with very few exceptions) end up.
    • FHS (Score:3, Interesting)

      by krmt ( 91422 )
      I don't get it. Isn't this what the FHS [pathname.com] already solves? I know Debian follows the FHS as part of policy, and so everything basically has a set place where it goes. The basics work like this:
      • Binaries meant for normal users go into /usr/bin, unless they're part of the base system, in which case they go into /bin. If they're part of XFree86's special playground, then they go into /usr/X11R6/bin, but that's really an ugly holdover more than anything.
      • Binaries for administrators go into /sbin or /usr/sbin
    • That's not exactly the kernel's job. It's the guys that put together your distro (Redhat, Debian, etc.) that make that decision.

      Each system has its advantages and disadvantages. By putting all files in C:\Program Files\program_name, yeah it keeps the apps nice and organised in their own directory (well not really) but it makes the command line pretty much useless. You would either have to add every subdirectory of Program Files to the path or type in the full path of the programme you want to run everytim

    • by burns210 ( 572621 ) <maburns@gmail.com> on Tuesday January 27, 2004 @10:56PM (#8108966) Homepage Journal
      KISS: keep it simple stupid...

      MacOS classic (1-9) had a nice directory system, and I think its simplicity could be carried over to Unix boxen:

      /
      /app/PROGRAM NAME
      /user/USERNAME
      /sys/

      99% of programs would install to /app/ with their own subdirectory, like /app/apache/.
      A user would have a /user/ subfolder, which would contain a user root directory (like the partition's root directory, but limited to the user): /user/NAME/ with sys, doc, app, pub, etc. /sys/ would have the standard libraries and other kernel and core system stuff.

      Programs, system, documents: 3 basic categories. With a multi-user system, documents become the user listing, and you have programs, system, userfiles.

      3 directories, that's it.
    • by HopeOS ( 74340 ) on Wednesday January 28, 2004 @12:15AM (#8109603)
      This may come off as overly aggressive, and for that I apologize in advance, but people who haven't administered *nix boxes in large-scale deployments often fail to recognize that there's a deliberate method behind the file system layout.

      Each one of those directories has a very distinct purpose; it didn't happen that way by accident. The difference between /bin, /usr/bin, and /usr/local/bin may seem trivial to you as a user, but from an administrative vantage point, they are very important.

      In single user mode with an ailing system, the most you may successfully get booted is the root partition. You have at your disposal only /bin, /sbin, and /lib. That means that all tools necessary for fixing the system must be there including all kernel modules and shared libraries. It must also be possible for this device to be completely read-only, possibly even residing in firmware. Installing an application in /bin while its companion libraries are on /usr/lib would be folly since the /usr partition may be completely inaccessible. You may notice that some distributions install a stripped-down, statically-linked version of vi in /bin and a full-featured, shared-library version in /usr/bin. Now you know why.

      Once booted and all the necessary kernel modules are loaded from /lib, the remaining partitions can be mounted. On a single-user machine, the /usr directory may be on the same partition as root. Often times it has its own partition. But for large-scale deployments, the entire /usr partition may be on a network share. It may also be on a CDROM. Installing software to /usr may be impossible or require a site-wide change. Secondly, it won't do to have software trying to write data to this partition, so programs and data are always separated. All data goes to /var which is normally a machine-specific mount. Also, a diskless machine may mount /var on a ram disk.

      To address software installed on individual machines, we use the /usr/local directory. If /usr is read-only, /usr/local is mounted to a separate writeable volume. All software not packaged by the distributor or site administrator belongs in /usr/local if it's machine-wide and in the user's home directory if not. Other conventions exist, including the use of /opt, but that's a site policy issue.

      So that's that. Given any package, it is a simple matter to determine if its executables go to /bin, /usr/bin, or /usr/local/bin. Libraries go to the equivalent lib directory. Header files to the equivalent include directory. Manual pages to man. Cross-application data to share. All application data goes to /var including log files and databases. All temporary files go to /tmp. If you follow these rules, there's no end to the configurations you can create. Violate any single rule and you have a machine that cannot be recovered, applications that cannot be shared site-wide, machine-wide, or between users, and data that cannot be conveniently backed up. Sorta like Windows.

      You specifically address the issue of plug-ins, but even having an application located at /usr/software/netscape won't help if the installer is looking for /usr/software/mozilla. This class of problem has been solved many times over with package configuration files and scripts. The responsibility is mainly that of the distribution maintainers to facilitate this. If it's not happening for your distro, get satisfaction, or move to a distro that cares.

      That said, the browser plug-in issue annoys me, too.

      -Hope
    • This has nothing to do with the Linux kernel per se.

      Still, the file system hierarchy is basically fine the way it is defined in the Filesystem Hierarchy Standard, LSB, etc. FHS has been around for a long time, at least eight years, as far as I can recall.

      If I compile software that isn't already part of a distro myself, I tend to configure those packages with --prefix=/usr/local/stow and then use stow to install symlinks under /usr/local. That's pretty close to what you're suggesting, no?
  • by CSharpMinor ( 610476 ) on Tuesday January 27, 2004 @08:52PM (#8107769)
    From an MS ad embedded in the article:
    "Windows Server 2003 offers a savings of 11-22% over Linux in 4 out of 5 workplace scenarios."

    From the text of the article:
    "The company said in a 2001 Securities and Exchange Commission filing that Linux cut its technology expenses by $16 million, or 25 percent."
  • by t0ny ( 590331 ) on Tuesday January 27, 2004 @11:17PM (#8109148)
    To get an early glimpse at some of the thinking going into the next kernel, key vendors that aid in shaping the Linux kernel helped eWEEK last week put together a long-range wish list for 2.7

    And #1 on that list is... Paul, can we get a drum roll?

    #1- get rid of those damn, damn, r00t exploits!

  • by Jacek Poplawski ( 223457 ) on Wednesday January 28, 2004 @04:18AM (#8110758)

    "With a new Mozilla released, is the browser war back?

    I'm sticking with Internet Explorer
    I'm giving Mozilla a second chance
    The browser war?"


    What a dumb poll, what a dumb site. What should I choose if I am NOT using IE at all?
    Maybe there are better sites than that one to put articles about the Linux kernel on?

  • Cluster File System (Score:4, Informative)

    by dotwaffle ( 610149 ) <slashdot@wPARISalster.org minus city> on Wednesday January 28, 2004 @04:41AM (#8110860) Homepage
    There used to be a cluster FS for Windows called Mango - but that's now obsolete thanks to Win2003, which clusters. But Linux can't access that as far as I know. So there is a middleman - Coda. Coda is a clustered file system for use with WinNT/Win95/Linux and is already in the kernel as far as I know. Just clearing up the hole that appears to be at the bottom of the article (really... it's been in since 2.4!)
