Linux Software

What Will Be in Linux 2.7? 494

Realistic_Dragon writes "The first discussion has been sighted on the Linux kernel mailing list about putting together a feature list for Linux 2.7 - including hotplug CPU & RAM support, network-transparent sound, and improvements to Netfilter to bring it up to the level of OpenBSD's Packet Filter. And all this before most of us have started running 2.6.0-preX, let alone a stable 2.6 release. Perhaps if you have a (sensible) idea, now would be a good time to voice it; otherwise you will have to wait for 2.9 to get it included."
  • by bombadillo ( 706765 ) on Friday October 10, 2003 @03:04PM (#7185160)
    " Ummmm... Wouldn't you fry the motherboard by swapping a CPU when the computer's on?" Think Enterprise environment and Big Iron, not desktop machines.
  • by Feztaa ( 633745 ) on Friday October 10, 2003 @03:05PM (#7185164) Homepage
    I believe they're referring to some mainframes, in which there are bays of CPUs/RAM that can be swapped in and out while the system is running.

    CPU hotplug support is not designed for removing the processor from your single-CPU x86 box.
  • by spyder913 ( 448266 ) on Friday October 10, 2003 @03:06PM (#7185178)
    There is hardware that supports this for higher end servers. (With multiprocessor, AFAIK). Just another way to reduce downtime.
  • Two Kernel Monte (Score:5, Informative)

    by strredwolf ( 532 ) on Friday October 10, 2003 @03:12PM (#7185226) Homepage Journal
    http://www.scyld.com/products/beowulf/software/monte.html

    Already there.
  • by Dr. Zowie ( 109983 ) * <slashdotNO@SPAMdeforest.org> on Friday October 10, 2003 @03:14PM (#7185236)
    User-friendly configuration has been done.

    I'd settle for power management working right.

  • by Anonymous Coward on Friday October 10, 2003 @03:24PM (#7185299)
    Good 64-bit support was added in what, Linux 1.2 or so. Digital (remember them?) lent Linus a few Alpha boxes for the purpose. One can still find occasional kinks in less-used applications, but the kernel has been working fine on 64-bit computers for a good while.
  • by chez69 ( 135760 ) on Friday October 10, 2003 @03:42PM (#7185410) Homepage Journal
    NFS used to be userspace and was slow as shit.
  • by paulbd ( 118132 ) on Friday October 10, 2003 @03:48PM (#7185464) Homepage

    linux doesn't only ship with a timeshare scheduler. it includes both the SCHED_FIFO and SCHED_RR schedulers, which provide close-to-real-time scheduling capabilities. most pro apps in the audio realm use one or both of these. they can both be used alongside the SCHED_OTHER ("timeshare") scheduler.

    what would be more interesting would be CPU cycle reservation, which is already present in OS X, and would be very useful for any streaming media software.

  • by swillden ( 191260 ) * <shawn-ds@willden.org> on Friday October 10, 2003 @04:05PM (#7185597) Journal

    there's absolutely no reason why eg sound drivers and network cards can't be maintained independently with their own build process

    Actually there is a practical reason why they're maintained within the kernel sources and not externally. The reason is that it allows the kernel developers more freedom to change the kernel. They don't have to worry about breaking a lot of dependent drivers because if they make a change that would break drivers, they have all the driver sources and can (and do!) go update them.

    Have you tried to compile the nVidia drivers lately? It can be a pain if your kernel headers aren't quite right.

    And this is an excellent example: because the kernel developers don't have the nvidia binary-only drivers in their tree, the drivers get broken quite frequently. You can argue that they should just stabilize the kernel API exposed to modules, but that would tie the developers' hands and force them into a lot of backward compatibility kludges. For better or for worse (and I think it's for better), that's not the Linux way.

  • by temojen ( 678985 ) on Friday October 10, 2003 @04:08PM (#7185621) Journal

    When using multiple USB keyboards, all keyboards can be accessed through /dev/input/keyboard, and input from all keyboards appears on the console. (Unless you don't insmod kbdev.o and instead use /dev/input/eventX, which disables the console unless you also have a PS/2 keyboard, as well as using a decidedly non-console-like API.)

    If instead there were /dev/input/keyboards optionally linked to the console, and /dev/input/keyboard0..n (like it is with USB mice), we could use multiple video cards and an appropriately modified X to build multi-seat workstations, POS terminals, etc. without needing X terminals.

    PCI VGA card: ~$50, vs. ~$500 per X terminal.

  • by crow ( 16139 ) on Friday October 10, 2003 @04:17PM (#7185653) Homepage Journal
    What you need is a smarter boot loader. What if grub had the option to use a fallback kernel if it is invoked more than once in a one-hour period? What if you could tell grub to use a non-default kernel on the next boot only? Or some other option along those lines?

    Then you could boot your test kernel remotely, and if it failed, you could power-cycle and be back to your safe kernel.

    Another way of accomplishing this would be to implement loadlin for Linux to load your test kernel. (loadlin was a DOS program that would boot Linux on a multi-boot system, used back in the days when many people used a UMSDOS file system.)
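    GRUB Legacy's existing "default saved" / "fallback" mechanism can already be wired up roughly along these lines. A sketch (paths, root device, and entry numbers are illustrative):

```
# /boot/grub/menu.lst -- boot the test kernel once; fall back if it wedges.
# "default saved" boots whichever entry was last saved as the default;
# "fallback 1" tries entry 1 if the default entry fails.
default saved
fallback 1

title Test kernel
    kernel /boot/vmlinuz-test root=/dev/hda1
    # Immediately mark the fallback entry as the next default, so a
    # crash-and-power-cycle lands on the safe kernel:
    savedefault fallback

title Safe kernel
    kernel /boot/vmlinuz-safe root=/dev/hda1
    savedefault

# If the test kernel comes up fine, userspace runs "grub-set-default 0"
# to mark it good again.
```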
  • by AdamHaun ( 43173 ) on Friday October 10, 2003 @04:18PM (#7185664) Journal
    2.6 already supports the Athlon-64, and GCC has architecture optimizations for it as well.

    How did this get modded up?
  • Re:Two Kernel Monte (Score:3, Informative)

    by caseih ( 160668 ) on Friday October 10, 2003 @05:26PM (#7185990)
    This is not what the original poster was asking for. The kernel monte actually just does an effective reboot without going through the BIOS. As I understand it, the kernel monte cannot transfer running processes from one kernel to another. Last time I used it, the monte killed all my processes and reloaded init (basically rebooting).

    What the poster wants (and what I want) is the ability to load a new kernel, transfer the existing kernel tables (process, resource, driver status, etc) over to the new kernel and have things continue without interruption.

    Michael
  • by cybrthng ( 22291 ) on Friday October 10, 2003 @05:28PM (#7185995) Homepage Journal
    You have to be kidding, right? Dump Veritas? What are you smoking? Veritas isn't just a Volume Manager/Disksuite; it is a supported, planned, and critical piece of your infrastructure! You rely on Veritas because you know it is tried, true, and recoverable. The excellent relationship between Sun and Veritas is a reason to use the platform, just like Veritas on HP-UX and other platforms. You're not just paying for the software; you're paying for the support and for the mission-critical needs that demand that solution. Veritas is an EXCELLENT package, and nothing in Linux comes close to the Veritas & Sun solutions on certified hardware. (And if you compare the Linux solutions on Linux-certified systems with the same performance, manageability, and support you get from Sun, I would like to see ONE vendor that can compare!)

    I also don't believe you understand the usefulness of Sun (non-Linux) solutions. You keep correlating the costs to acquisition. In the real world the hardware/software costs don't mean squat. Any large IT business knows that your biggest costs are employees, software, licensing, support, and contractors.

    For one, I can spend $32,000 on a 4-way 64-bit CPU machine with 8 GB of memory and 500 GB of disk space, and have hot-swappable CPUs, a VASTLY superior backplane, vastly superior scalability for growth, and a proven, reliable architecture. You can't buy ANY Linux/Wintel solution that comes close to the Solaris/RS6000/HP-UX based systems out there. As I've stated before, there is only ONE vendor that offers a machine feature-comparable to Sun's LOW END/MID RANGE v880s, and it doesn't come close in power. For example, the only Linux-enabled hot-swappable CPU/backplane/Intel solution is built on four 700 MHz Pentium 3 processors and costs $24,000 for the base system. My quad 1.2 GHz v880 out of the box doesn't require anything proprietary, but on the Linux solution you have to run the vendor's version of Linux and the vendor's compiled versions of the apps, and can only use the vendor's approved add-ons. Sure, Sun is only one vendor, but Solaris is Solaris. There isn't a mix-and-match of versions and releases, and there isn't a version of Solaris for my v880 that doesn't work on my e10k. I can grow with a common platform to support from 1 user to 65,000+ users, and even cluster to support growth from that point on.

    You have to get your mindset away from free/cheap = better. You have to realize that in the business world the costs for platforms that are tried and true is expected and also minimal compared to the costs to keep it running.

    I would rather run my 2-terabyte financial application on a slower Sun server because of the reliability, the proven architecture, and the HA features. You have to remember that in my case 5 minutes of downtime costs $137,000. Suddenly a $3,000 Veritas volume management solution and a $100,000 hardware platform are not only justifiable but almost insufficient in themselves if you break out the cost-vs-requirements ratio.

    I can make my 3-terabyte Clariion system, my Sun v880 systems, my Sun 280R/240R webservers, and my Solaris management workstations run for months at a time in pure harmony. The fact that NOTHING CHANGES ON A WHIM IS A GODSEND!! The stability, and the slowness with which things change, is the reason businesses rely on such systems; the costs are far from just your hardware/OS purchase price.
  • Safe Video (Score:3, Informative)

    by cgreuter ( 82182 ) on Friday October 10, 2003 @05:43PM (#7186082)

    I'd really like to have an interface to the video system that is both fast and safe. At the moment, it's one or the other. Either I use straight X11 or I let the program bang on the hardware directly via DRI, SVGALib or the like.

    I'd like to see video drivers in the kernel. Not necessarily full-featured OpenGL drivers, but something that:

    1. Sets where in memory the card is allowed to read and write so that usermode programs can't trash system memory.
    2. Provides a reliable way to reset the video state so that we can easily get the display back to a sane state after something crashes.
    3. Provides fast, well-defined access to common (i.e. not cutting-edge) video functionality, possibly by letting the user program memory-map the frame buffer, so that simple graphics stuff is easy to do and doesn't need special privileges.
    4. Provides a mechanism for applications to use the cards' advanced features (e.g. 3D hardware) so that binary-only device drivers are still possible, although not as part of the kernel. (This isn't strictly necessary from a technical point of view but I don't think most of the video card makers will release GPL'd drivers for their crown jewels. They might allow them for the basic stuff, though.)
    5. Associates video state with virtual consoles so that I can switch between graphical applications by hitting ALT+Fn. (Okay, this one isn't strictly necessary but it's really cool.)

    Of these, #4 may not be possible to do safely, or may only be possible for some cards. If so, it would still be a win because a lot of applications will do fine with only the basic functionality and over time, as the bleeding-edge stuff becomes mundane, it will slowly trickle into the #3 category.

  • by AaronW ( 33736 ) on Friday October 10, 2003 @06:23PM (#7186335) Homepage
    This is actually present in TimeSys Linux, which is a very cool feature, BTW. It lets you guarantee a certain amount of CPU resources and latency.

    -Aaron
  • Re:Whatever (Score:3, Informative)

    by TheOrquithVagrant ( 582340 ) on Saturday October 11, 2003 @09:29AM (#7189292)
    And here we go with the woefully misinformed Solaris advocacy again.

    > I can make changes in the running kernel (instead of rebooting).

    What "changes" are you talking about here? Modules in linux can be loaded and unloaded without rebooting, and that most definitely is "making changes in a running kernel".

    > I can set control variables for the kernel on future reboots (instead of recompiling the entire thing).

    Ok, here's the thing that really irks me. Where do you people GET this idiocy from? Does SUN feed you this BS at courses or something? You can set control variables in Linux on future reboots. Just edit /etc/sysctl.conf. However, in Linux, all these control variables can also be set _without_ even rebooting. And this is the real riot: you can set plenty of control variables in Linux without a reboot which in Solaris REQUIRE a reboot. Just issue "sysctl -p", and the new values in your sysctl.conf will be effective immediately. It's a hell of a lot nicer (my opinion, of course) than having to fire up the kernel debugger (*shudder*) to change dynamic variables, like you have to do in Solaris. To give an example, in Solaris SysV Shared Memory parameters are not dynamic and can only be changed by editing /etc/system and then doing a reboot. In linux, these are tunable on the fly.
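    The flow described above, sketched out (the shmmax value is illustrative; the writes need root, so they are shown as comments):

```shell
# Read a kernel tunable on the fly -- no reboot, no recompile:
cat /proc/sys/kernel/shmmax
# As root, one could change it immediately, e.g.:
#   sysctl -w kernel.shmmax=134217728
# To persist it, add a line to /etc/sysctl.conf:
#   kernel.shmmax = 134217728
# ...then re-apply the whole file, still without rebooting:
#   sysctl -p
```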

    And here follows more displays of fascinating ignorance/misinformation:

    > Individual kernel modules can have their own read-on-module-load-by-the-kernel config file; in Linux the only general way of tweaking modules' control values is by editing the source.

    /etc/modules.conf will set your read-on-module-load-by-kernel control values. No recompiles needed here either.

    No argument about the mess that is /proc, except to say that /proc is a sometimes _useful_ mess, since it allows for tuning things on the fly that Solaris will only allow you to change with a reboot... like the abovementioned SysV shared memory settings.

    No argument about devfs either. This most definitely IS something that solaris does better, and where Linux is catching up.

    My own general impression from working with Linux and Solaris both, however, is that Solaris may be better in a few, small, specific areas mostly relating to really huge boxes, but that Linux stomps big time over Solaris in most areas, including areas where pure ignorance makes Solaris-advocates believe Solaris is superior.
