Linux 3.3 Released

diegocg writes "Linux 3.3 has been released. The changes include the merge of kernel code from the Android project. There is also support for a new architecture (TI C6X), much improved balancing and the ability to restripe between different RAID profiles in Btrfs, and several network improvements: a virtual switch implementation (Open vSwitch) designed for virtualization scenarios, a faster and more scalable alternative to the 'bonding' driver, a configurable limit to the transmission queue of the network devices to fight bufferbloat, a network priority control group and per-cgroup TCP buffer limits. There are also many small features and new drivers and fixes. Here's the full changelog."
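
For readers wondering what the new network priority control group mentioned in the summary looks like in practice, here is a minimal Python sketch of how it might be driven through the cgroup filesystem, going by the kernel's net_prio documentation. The mount point, group name, interface name and priority value are illustrative assumptions, and it needs root on a 3.3 or newer kernel.

    # Hypothetical sketch: assign a network priority to a group of processes
    # using the net_prio cgroup added in Linux 3.3. Mount point, group name
    # and interface name are illustrative assumptions; run as root.
    import os
    import subprocess

    CGROUP_ROOT = "/sys/fs/cgroup/net_prio"    # assumed mount point
    GROUP = os.path.join(CGROUP_ROOT, "bulk")  # example cgroup name

    # Mount the net_prio cgroup controller if it isn't mounted yet.
    if not os.path.ismount(CGROUP_ROOT):
        os.makedirs(CGROUP_ROOT, exist_ok=True)
        subprocess.check_call(
            ["mount", "-t", "cgroup", "-o", "net_prio", "none", CGROUP_ROOT])

    os.makedirs(GROUP, exist_ok=True)

    # Give traffic from this group priority 5 on eth0 (interface is an assumption).
    with open(os.path.join(GROUP, "net_prio.ifpriomap"), "w") as f:
        f.write("eth0 5\n")

    # Move the current process into the group; its traffic picks up the priority.
    with open(os.path.join(GROUP, "tasks"), "w") as f:
        f.write(str(os.getpid()))

The map applies per interface, overriding the priority of traffic sent by processes in the group, and queueing disciplines like the ones discussed in the comments below can then act on that priority.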
  • Yea! (Score:2, Funny)

    Yea!

  • Keep it up. (Score:5, Insightful)

    by Severus Snape ( 2376318 ) on Sunday March 18, 2012 @10:12PM (#39399949)
    The Linux kernel guys show that constant steady frequent releases are the way forwards, note to GNOME and KDE guys, you got it wrong.
    • Re:Keep it up. (Score:5, Insightful)

      by houstonbofh ( 602064 ) on Sunday March 18, 2012 @10:19PM (#39399987)
      That and the kernel guys actually put in features people want and need, not shove unwanted changes down the users thoughts...
      • by MrEricSir ( 398214 ) on Sunday March 18, 2012 @11:17PM (#39400215) Homepage

        shove unwanted changes down the users thoughts...

        You can uninstall GBrain and/or MindKontrol to prevent Gnome and KDE from controlling your thoughts.

    • by Smauler ( 915644 ) on Monday March 19, 2012 @12:53AM (#39400533)

      The Linux kernel guys show that constant steady

      I agree... 1 sec.

      frequent releases are the way forwards, note

      Argh... just got to...

      to GNOME and KDE

      update firefox...

      guys, you

      Again?

      got

      Must

      it

      finish

      wrong.

      comment.

    • The Linux kernel guys show that constant steady frequent releases are the way forwards, note to GNOME and KDE guys, you got it wrong.

      I don't know about GNOME, but KDE has released a new version every six months since 4.0

    • So why then all the hate when Mozilla follows the same release mentality?

  • Recursion (Score:4, Funny)

    by philip.paradis ( 2580427 ) on Sunday March 18, 2012 @10:14PM (#39399961)

    If I deploy a 3.3 guest on a host running 3.3, does it automatically become 3.3 repeating and go on forever?

  • by Anonymous Coward on Sunday March 18, 2012 @10:22PM (#39400001)

    Wow, I had no idea there was work in porting Linux to DSP architectures. That's quite an interesting development. I wonder what the use case is, since DSPs are typically used for very specific, real-time work, not for hosting general-purpose operating systems.

    Also, it's quite surprising to me since as far as I know it's necessary to use TI's compiler to generate C6X code. I found one initiative to port GCC to it, but afaik it didn't get finished. My understanding is that it is no small job to get Linux to compile on non-supported compilers, so I'm interested in the toolchain they are using. For my own work on a C6711, I've been using the TI compiler under Wine. (Which works fine actually, although I had to generate an initial project in CodeComposer to get some of the board-specific support files.)

    • by unixisc ( 2429386 ) on Sunday March 18, 2012 @11:22PM (#39400239)
      Also, isn't TI C6X a VLIW - in which case, it would need some very elaborate state of the art compilers? Anybody writing a compiler for this thing would have to write one that does, in addition to the usual activities, VLIW stuff like register renaming and allocation, branch prediction and speculative execution, and so on. Would GCC (or LLVM/Clang) put that sort of effort into a compiler?
      • by steveha ( 103154 ) on Monday March 19, 2012 @03:50AM (#39401059) Homepage

        The TI C6X chips are not only VLIW, they are "DSP" chips, optimized for signal-processing operations. They also have no MMU. Nobody is going to build a tablet computer or any other general-purpose device based on one of these.

        I think for the near term at least, anyone using a TI C6X will be using the TI C compiler. TI has a whole IDE, called Code Composer Studio. [ti.com]

        But now we have the possibility of running Linux on the chip.

        The one time I worked with a TI DSP chip, I didn't really have an operating system. Just a bootstrap loader, and then my code ran on the bare metal, along with some TI-supplied library code. Now I'm working with an Analog Devices DSP chip and it's the same situation. For my current purposes I'm not using any OS at all. But Linux support could potentially be great; for example, if you were using a platform with an Ethernet interface, you could use the Linux networking code; if you were using a platform with USB, you could use Linux USB code and file system code and so on.

        steveha

    • by macshit ( 157376 ) <[snogglethorpe] [at] [gmail.com]> on Monday March 19, 2012 @12:07AM (#39400409) Homepage

      Also, it's quite surprising to me since as far as I know it's necessary to use TI's compiler to generate C6X code. I found one initiative to port GCC to it, but afaik it didn't get finished. My understanding is that it is no small job to get Linux to compile on non-supported compilers, so I'm interested in the toolchain they are using.

      GCC 4.7 (which will be released soonish; it's basically already done) supports the C6X architecture.

      From the GCC 4.7 release notes [gnu.org]:

      New Targets and Target Specific Improvements:
      ...
      C6X

      • Support has been added for the Texas Instruments C6X family of processors.
  • by quantumphaze ( 1245466 ) on Sunday March 18, 2012 @10:27PM (#39400027)

    I just rebooted to apply 3.2.11 :(

  • by markdavis ( 642305 ) on Sunday March 18, 2012 @11:24PM (#39400249)

    It does appear this means the possibility of running an entire Android "system" and its "apps" under a normal Linux desktop/laptop/tablet, but without emulation. Correct? If so, I can see that being a great thing.

  • my own kernel, again. --sigh-- Or at least no more kernel patches until I get a chance to review just how much cruft got shoved in for Android support. Fucking Google.

    • It seems pretty clear stuff is not just being shoved in willy-nilly for Android. There have been many debates about including this piece or that piece, and about whether the implementation should be identical to the Android version. Many parts are not in yet, and some may not go in at all. The Android suspend solution may never go in; mainline may eventually get a system that serves the same purpose in a different way, and Android may eventually support that. LWN and the LKML posts they link to give a pretty good overview short of reading all the code commits.

      • by grouchomarxist ( 127479 ) on Monday March 19, 2012 @12:45AM (#39400519)

        The lwn post is here: https://lwn.net/Articles/472984/ [lwn.net]

        There are a lot of things they're leaving out for the time being.

      • Well... Having taken a brief glance through the 3.3 patch file and the LWN posts, I am really disappointed, yet again, that Google thinks their code is special. The ashmem code is pretty much a duplicate of existing async shared memory calls, which can associate handles to memory in a way ashmem cannot. Wakelocks are just god awful, but the "possible" upside is that perhaps they can be transmuted into something that makes power management a little better.

        The whole damn thing just makes the hair on the back of my neck stand up.

        • by Kjella ( 173770 ) on Monday March 19, 2012 @04:53AM (#39401241) Homepage

          You gotta wonder what the hell Linus is thinking on this.

          Well, while he's tough as nails on code quality, he's always been a pragmatic man. When an interface is used on hundreds of millions of Android devices, it's something worth supporting if he can, as long as it doesn't interact badly with the mainline code. And that's exactly why something like wakelocks is still out while other pieces are in. I don't think Linus believes in the one perfect system; if he has to support different IPCs then fine, but maybe the implementations can share code and work towards supporting several approaches.

          Remember, it's not in anybody's interest to diverge just to diverge; it's just that sometimes it's better to do your own thing and show that it works rather than trying to get permission to change an old recipe. A lot of branches have lived in parallel to mainline and eventually gotten merged in as the real needs and differences - not just the NIH and semantics - have emerged. Getting over these hurdles and keeping the kernel from fracturing into smaller branches that each go their separate ways has always been one of the true strengths of the project.

  • by Wonko the Sane ( 25252 ) * on Sunday March 18, 2012 @11:40PM (#39400315) Journal

    I've been reading for a year about bufferbloat and all these tools designed to mitigate it, but none of the explanations make sense to someone who isn't already a traffic control guru.

    Can someone explain how, if I'm using a typical Linux system as a firewall between my LAN and a cable modem, I should reconfigure that system if I want to not experience bufferbloat?

    • by Maow ( 620678 )

      I've been reading for a year about bufferbloat and all these tools designed to mitigate it, but none of the explanations make sense to someone who isn't already a traffic control guru.

      Can someone explain how, if I'm using a typical Linux system as a firewall between my LAN and a cable modem, I should reconfigure that system if I want to not experience bufferbloat?

      Note that I am in no way a network guru / expert, etc. so take my comment with a large dose of salt.

      That said, I don't think there's much you can do in a home environment to mitigate bufferbloat; for the most part it's an issue where large ISPs, other large networks, and backbones interconnect.

      I'm not going to say much more at the risk of being egregiously wrong. I'll just wait for someone more knowledgeable to jump in and enlighten us both...

      For anyone reading who is interested in the issue:

      Bufferbloat [wikipedia.org]:

      This problem i

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      You want to limit your outgoing transmission speed using QoS to be just under your outgoing bandwidth limit. This prevents your ISP from buffering traffic and reduces latency, increasing responsiveness to things like incoming SSH connections.
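
      To make the parent's suggestion a bit more concrete, here is a rough Python sketch that shells out to tc to cap egress on the WAN-facing interface just below the upstream rate, so queues build locally (where you control them) instead of in the modem. The interface name and rate are assumptions; a real setup would add prioritized classes along the lines of the HTB script linked in a reply below.

        # Rough sketch of the parent's suggestion: cap outbound traffic just
        # below the upstream rate so the modem/ISP buffers stay empty.
        # Interface name and rates are assumptions; needs root and iproute2.
        import subprocess

        WAN_IF = "eth0"        # assumed WAN-facing interface
        UPLINK_KBIT = 900      # assumed ~1 Mbit/s uplink, capped a bit below

        def tc(*args):
            subprocess.check_call(["tc"] + list(args))

        # Start from a clean slate (ignore the error if no qdisc exists yet).
        subprocess.call(["tc", "qdisc", "del", "dev", WAN_IF, "root"])

        # Root HTB qdisc; unclassified traffic falls into class 1:10.
        tc("qdisc", "add", "dev", WAN_IF, "root", "handle", "1:", "htb",
           "default", "10")

        # Single class that caps egress just under the real upstream bandwidth.
        tc("class", "add", "dev", WAN_IF, "parent", "1:", "classid", "1:10",
           "htb", "rate", f"{UPLINK_KBIT}kbit", "ceil", f"{UPLINK_KBIT}kbit")

        # Keep the local queue short and fair so interactive packets don't
        # wait behind bulk transfers.
        tc("qdisc", "add", "dev", WAN_IF, "parent", "1:10", "handle", "10:",
           "sfq", "perturb", "10")

      Shaping the download direction from the home side is harder; it generally needs policing or redirecting ingress through an IFB device, which is beyond this sketch.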

    • Re:Bufferbloat (Score:5, Informative)

      by bytestorm ( 1296659 ) on Monday March 19, 2012 @12:43AM (#39400515)
      There isn't an easy answer to your question. In general, bufferbloat is when you get latency or jitter issues because some network device upstream of you has a large buffer, which it fills before it starts dropping your packets. Dropped packets are how software relying on TCP is notified of network congestion so it knows to throttle back. Other protocols may be affected differently (you might notice VoIP delay or bad lag on your Xbox).

      To combat this, the idea is to limit your traffic in buffers you control, which are (typically) smaller than your ISP's and modem's buffers, so the ISP's buffers stay empty and the link stays interactive. In general, this means limiting your data rates to lower than your bandwidth and prioritizing packets by interactivity requirements. The Linux kernel additions in 3.3 let you set a smaller buffer size for the entire interface, with the goal of reducing the delay induced by the Linux router/bridge (a rough sketch of those per-interface knobs follows this thread). They also add the ability to prioritize traffic and limit buffers by cgroup (which is like a process categorization or pool with certain resource limits), but this isn't particularly helpful in your forwarding situation.

      For my own QoS setup, I usually use a script similar to this HTB one [lartc.org]. It requires some tuning, and getting your queue priorities right requires some understanding of the traffic going through your network. A lot of high-level netfilter tools (Smoothwall, DD-WRT, etc.) have easier-to-use QoS tools which may better suit your purposes. Having not used one, I'm not in a position to recommend them.
      • I use Shorewall to configure packet filtering for me, which offers some QoS support [shorewall.net]. It seems simple enough, but I'm not sure how to tell whether it uses or is affected by the new kernel options. I understand packet filtering a lot better than I understand traffic control.
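
      As a rough illustration of the per-interface queue limit mentioned above (byte queue limits, new in 3.3), here is a small Python sketch that reads and caps the knobs exposed under sysfs. The interface name and the 64 KB cap are assumptions, it needs root, and the files only appear if the driver supports BQL.

        # Small sketch of the per-interface transmit queue limits (BQL) added
        # in 3.3. Interface name and the 64 KB cap are assumptions; needs root
        # and a driver with BQL support (otherwise the directory is absent).
        import glob
        import os

        IFACE = "eth0"            # assumed interface; change to your NIC
        CAP_BYTES = 64 * 1024     # assumed cap; tune for your link speed

        pattern = f"/sys/class/net/{IFACE}/queues/tx-*/byte_queue_limits"
        for bql_dir in glob.glob(pattern):
            # 'inflight' shows how many bytes are currently queued to the NIC.
            with open(os.path.join(bql_dir, "inflight")) as f:
                print(bql_dir, "inflight bytes:", f.read().strip())
            # 'limit_max' bounds how large the self-tuning limit may grow.
            with open(os.path.join(bql_dir, "limit_max"), "w") as f:
                f.write(str(CAP_BYTES))

      Note this only shortens the queue on the Linux box's own NIC; the modem and ISP buffers still need the rate-limiting approach from the earlier reply.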
  • Power Management (Score:4, Interesting)

    by Phantasmagoria ( 1595 ) <loban.rahman+slashdotNO@SPAMgmail.com> on Monday March 19, 2012 @12:06AM (#39400391)

    Any improvements to power management? It pains me that my laptop gets 4 hours of battery life in Windows 7 but only 2 hours in Linux. In both cases it's just idle, with nothing special running in the background. Or is this a problem with the distribution?

  • by Irick ( 1842362 ) on Monday March 19, 2012 @12:20AM (#39400449)
    Seriously, they do some good work. I'm excited to see if this fixes sleep on some of the more obscure devices and gives us better power management.
  • by devphaeton ( 695736 ) on Monday March 19, 2012 @12:37AM (#39400495)

    Back in the Middle Ages (late 1990s through about 2004) I remember us all getting excited for new kernel releases, and then all rushing to download the source and build it. (By 'us' i mean myself and local geek friends, as well as our cohorts on various IRC channels).

    Nowadays with auto-configuring, rolling release desktop distributions being the norm, is kernel building now only done in server room environments and for non-PC hardware?

    This doesn't matter much, I'm just curious.

    • Yeah in the middle ages I was one of those rushing to the source and building it, but not as much anymore. I still rebuild it on my personal machine if I know I'll be using it a while, just to squeeze every last bit I can, but I'll readily admit I don't notice the difference in performance at all. I doubt I'll rebuild for this one as I don't see any features that really apply to me.

      As a personal user, I see fewer reasons to spend a lot of time on kernel tweaking and building, not like it was 10 years ago.

      • by devphaeton ( 695736 ) on Monday March 19, 2012 @01:48AM (#39400699)

        I was trying to remember the last time I built a Linux kernel. It would have been somewhere in the early 2.6.x series, on Debian Sid. Even in those early days I didn't really notice a difference in performance (unless I was compiling in drivers for specific hardware). The kernel image was smaller, and I knew that was better, but other than that it all ran about the same. I almost wonder now if the performance "increase" I saw back in the 2.2 days was all in my head. I used to see some performance differences in compiled FreeBSD kernels on my really old boxes (300 MHz K6-II with 128MB), but I think the differences have gotten smaller and smaller since the 4.x days.

        Like Wonko says, it's not a huge bit of effort to build a kernel. But I don't really see a reason to do it. I should give it a shot just for old time's sake, heh.

    • I still compile mine. I use git to download the sources, so that's a lot easier now than the tarball-and-patch method. Compiling and installing a new kernel only requires a few minutes and then a reboot at a convenient time. (A rough sketch of the usual steps follows this thread.)
    • PHP 5.4 was recently released and it has a really cool new feature. So I did all the hard work of finding a PPA (an Ubuntu user thingy, stop me if I get too technical) and added it and upgraded. That was pretty hardcore! Uber nerd!

      Once, kernel features were desperately needed. Now? Meh, they are probably very nice but I can wait for others to test and add them. Everything just works so why risk breaking it?

      MS has the same problem. XP and even more so Windows 7, just works. So how to sell Windows 8? And Linux a
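
    For anyone who wants to follow the earlier reply and roll their own 3.3, here is a rough sketch of the git-based build-and-install cycle, wrapped in Python only to stay consistent with the other examples here. The tree location and the reuse of the running kernel's /boot config are assumptions; the install steps need root and the usual build dependencies.

        # Rough sketch of the git-based kernel build cycle mentioned above.
        # Tree location and job count are assumptions; the install steps need
        # root and the usual build dependencies (gcc, make, etc.).
        import multiprocessing
        import os
        import platform
        import subprocess

        SRC = os.path.expanduser("~/src/linux")   # assumed tree location
        REPO = "git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git"
        JOBS = str(multiprocessing.cpu_count())

        def run(cmd, cwd=None, sudo=False):
            # Thin wrapper so the sequence reads like the shell commands it mirrors.
            subprocess.check_call((["sudo"] if sudo else []) + cmd, cwd=cwd)

        # Clone once; afterwards just fetch and check out the tag you want.
        if not os.path.isdir(SRC):
            os.makedirs(os.path.dirname(SRC), exist_ok=True)
            run(["git", "clone", REPO, SRC])
        run(["git", "fetch", "--tags"], cwd=SRC)
        run(["git", "checkout", "v3.3"], cwd=SRC)

        # Start from the running kernel's config (if your distro ships one in
        # /boot) and answer only the questions for new options.
        run(["cp", "/boot/config-" + platform.release(), ".config"], cwd=SRC)
        run(["make", "oldconfig"], cwd=SRC)

        # Build, then install the modules and the kernel image; on most
        # distributions 'make install' also updates the bootloader entries.
        run(["make", "-j" + JOBS], cwd=SRC)
        run(["make", "modules_install"], cwd=SRC, sudo=True)
        run(["make", "install"], cwd=SRC, sudo=True)

    On Debian-style systems, make deb-pkg is a tidier alternative that produces installable packages instead of installing straight onto the box.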

  • by Elrond, Duke of URL ( 2657 ) <JetpackJohn@gmail.com> on Monday March 19, 2012 @01:03AM (#39400555) Homepage

    I am a bit confused with regard to the new team network driver, which is eventually going to replace the current bonding net driver. The kernel newbies page says that it is user-space and uses libteam to do its work, but it also says that this new implementation will be more efficient.

    How is this so? As network throughput keeps increasing, it is important to process each packet as quickly as possible. That's why network drivers and the packet filter are in the kernel. Wouldn't moving the new team/bonding work to user-space mean a lot more data for the kernel to copy back and forth between kernel and user spaces? And wouldn't this hurt efficiency? I'm sure the computer can keep up in most cases, but it seems this will require more CPU time to handle the work.

    Just curious...

    • by Technonotice_Dom ( 686940 ) on Monday March 19, 2012 @02:20AM (#39400783)

      The idea, I believe, is more that userspace is responsible for deciding which device(s) are used for transmission and notifying the kernel, rather than being responsible for sending the packets itself. If you've got an active/backup bonding setup, it makes sense to perform connectivity checks from userspace, which can be flexible and complex, and then notify the kernel to switch or remove devices that have lost connectivity.

      The libteam [github.com] daemon that's in development seems to have a round-robin mode planned, and I'd hope 802.3ad too, but I guess we'll have to wait and see how that works. I'm sure it'll still need kernel support for the bonding implementations; it's just the monitoring and management functions that are being extracted.
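
      To give a feel for the split described above, here is a small Python sketch that writes a teamd-style JSON configuration: userspace (the libteam daemon) watches link state and picks the active port, while the kernel's team driver only forwards packets. The device and port names are assumptions, and since the daemon is still in development at the time of this release, the exact format may differ from what finally ships.

        # Illustrative sketch of the userspace/kernel split: teamd reads a JSON
        # config like this, monitors the ports (here via ethtool link state)
        # and tells the kernel's team driver which port to use; the kernel only
        # forwards packets. Device and port names are assumptions, and writing
        # to /etc needs root.
        import json

        team_config = {
            "device": "team0",                   # aggregated interface to create
            "runner": {"name": "activebackup"},  # userspace policy: one active port
            "link_watch": {"name": "ethtool"},   # how userspace monitors link state
            "ports": {"eth1": {}, "eth2": {}},   # physical NICs enslaved to team0
        }

        with open("/etc/teamd.conf", "w") as f:
            json.dump(team_config, f, indent=4)

        # The daemon would then be started against this file, e.g.:
        #   teamd -f /etc/teamd.conf -d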
