Linux 3.3: Making a Dent In Bufferbloat?

mtaht writes "Has anyone, besides those who worked on byte queue limits and sfqred, had a chance to benchmark networking using these tools on the Linux 3.3 kernel in the real world? A dent, at least theoretically, seems to have been made in bufferbloat, and now that the new kernel and the new iproute2 are out, it should be easy to apply these tools in general (e.g. server/desktop) situations." Dear readers: Have any of you had problems with bufferbloat that were alleviated by the new kernel version?
  • by tayhimself ( 791184 ) on Wednesday March 28, 2012 @12:05PM (#39497435)
    The ACM did a great series on bufferbloat:
    http://queue.acm.org/detail.cfm?id=2071893 [acm.org] and http://www.bufferbloat.net/projects/bloat/ [bufferbloat.net]
  • Re:Hm (Score:5, Interesting)

    by Anonymous Coward on Wednesday March 28, 2012 @12:37PM (#39497825)

    If your TCP flow-control packets are subject to QoS prioritisation (as they should be), then bufferbloat is pretty much moot.

    TCP flow-control makes the assumption that dropped packets and retransmission requests are a sufficient method of feedback. When it doesn't receive them, it assumes everything upstream is going perfectly and keeps sending data (in whatever amounts your QoS setup dictates, but that's not the point).

    Except everything upstream is maybe not going well. Other devices on the way may be masking problems -- instead of saying "I didn't get that, resend", they use their local buffers and tell downstream nothing. So TCP flow-control is being kept in the dark. Eventually a buffer runs out and that's when that device starts asking for a huge pile of fresh data at once... which is not how it was supposed to work. So the speed of the connection keeps fluctuating up and down even though pipes seem clear and underused.

    The immediate workaround is to trim buffers down everywhere. Buffers are good... but they shouldn't grow so large that they take on a life of their own, so to speak.
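
    For a sense of scale, here's a quick back-of-the-envelope sketch in Python (buffer size and link rates are purely illustrative, not from this thread) of how much latency a full buffer adds once it sits in front of a slow link:

        # Rough sketch: latency added by a full buffer in front of a link.
        # The buffer size and link rates below are illustrative assumptions.
        def queue_delay_ms(buffer_bytes, link_bps):
            """Time to drain a full buffer, in milliseconds."""
            return buffer_bytes * 8 / link_bps * 1000.0

        print(queue_delay_ms(256 * 1024, 1e6))    # 256 KB on a 1 Mbit/s uplink: ~2097 ms
        print(queue_delay_ms(256 * 1024, 100e6))  # the same buffer at 100 Mbit/s: ~21 ms

    The same buffer that is harmless on a fast link becomes roughly two seconds of lag on a slow one, which is why a single static size can't be right.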

    One thing done in Linux 3.3 to start addressing this properly is the ability to express buffer size in bytes (Byte Queue Limits, or BQL). Historically, queues were limited by packet counts only, because they were designed in an era when packets were small... but nowadays packets get big too. BQL gives us better control and somewhat alleviates the problem.
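
    A minimal sketch of peeking at those per-queue byte limits, assuming a 3.3+ kernel with a BQL-capable driver and an interface named eth0 (adjust the name and queue index to your setup):

        # Minimal sketch: read the per-TX-queue BQL state exposed in sysfs.
        # Assumes Linux 3.3+, a BQL-capable driver, and an interface "eth0".
        from pathlib import Path

        def read_bql(iface="eth0", txq=0):
            base = Path(f"/sys/class/net/{iface}/queues/tx-{txq}/byte_queue_limits")
            return {f.name: f.read_text().strip() for f in base.iterdir()}

        print(read_bql())  # e.g. {'limit': '18846', 'inflight': '0', ...}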

    The even better solution is to make buffers work together with TCP flow-control. That would be Active Queue Management (AQM), which is still being developed. It is basically an algorithm that decides how to adapt buffer use to traffic bursts. But a good enough algorithm has not been found yet (there's one but it's not very good). Once one is found it will need testing, then wide-scale deployment, and so on. That might still take years.
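
    For readers wondering what such an algorithm looks like, here is a rough sketch of classic RED (the best-known AQM, discussed further down-thread), with refinements left out; the hard-coded knobs are exactly what makes it awkward in practice. The values below are illustrative, not recommendations:

        import random

        # Rough sketch of classic RED marking (Floyd/Jacobson), omitting the
        # "count since last mark" refinement. The knobs W, MIN_TH, MAX_TH and
        # MAX_P are illustrative assumptions, and they are the manual tuning
        # that gets criticised further down the thread.
        W, MIN_TH, MAX_TH, MAX_P = 0.002, 5000, 15000, 0.02

        avg = 0.0
        def red_should_drop(queue_bytes):
            global avg
            avg = (1 - W) * avg + W * queue_bytes            # EWMA of queue occupancy
            if avg < MIN_TH:
                return False                                 # short queue: never drop
            if avg >= MAX_TH:
                return True                                  # long queue: always drop
            p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)   # linear ramp in between
            return random.random() < p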

  • by carton ( 105671 ) on Wednesday March 28, 2012 @01:35PM (#39498427)

    The bufferbloat "movement" infuriates me because it's light on science and heavy on publicity. It reminds me of my dad's story about his buddy who tried to make his car go faster by cutting a hole in the firewall underneath the gas petal so he could push it down further.

    There's lots of research on this dating back to the '90s, starting with CBQ and RED. The existing research is underdeployed, and merely shortening the buffers is definitely the wrong move. We should use an adaptive algorithm like BLUE or DBL, which are descendants of RED. These don't have constants that need tuning like queue length (FIFO/bufferbloat) or drop probability (RED), and they're meant to handle TCP and non-TCP (RTP/UDP) flows differently. Linux does support these in 'tc', but (1) we need to do it by default, not after painful amounts of undocumented configuration, and (2) to do them at >1Gbit/s ideally we need NIC support. FWIH Cisco supports DBL in cat45k sup4 and newer but I'm not positive, and they leave it off by default.
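
    For anyone curious what "no constants tied to queue length" means in practice, here is a rough sketch of the BLUE idea (not any particular implementation; the step sizes and freeze time are illustrative assumptions): the marking probability adapts from loss and link-idle events rather than from queue occupancy.

        import random, time

        # Rough sketch of BLUE-style adaptation: bump the marking probability
        # when the queue overflows, decay it when the link goes idle. The
        # step sizes and freeze time below are illustrative assumptions.
        class Blue:
            def __init__(self, d1=0.02, d2=0.002, freeze_s=0.1):
                self.p, self.d1, self.d2, self.freeze_s = 0.0, d1, d2, freeze_s
                self.last_update = 0.0

            def _ready(self):
                return time.monotonic() - self.last_update > self.freeze_s

            def on_queue_overflow(self):        # congestion: mark more aggressively
                if self._ready():
                    self.p = min(1.0, self.p + self.d1)
                    self.last_update = time.monotonic()

            def on_link_idle(self):             # link underused: back off
                if self._ready():
                    self.p = max(0.0, self.p - self.d2)
                    self.last_update = time.monotonic()

            def should_mark(self):
                return random.random() < self.p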

    For file sharing, HFSC is probably more appropriate. It's the descendant of CBQ, and is supported in 'tc'. But to do any queueing on cable Internet, Linux needs to be running, with 'tc', *on the cable modem*. With DSL you can somewhat fake it because you know what speed the uplink is, so you can simulate the ATM bottleneck inside the kernel and then emit prescheduled packets to the DSL modem over Ethernet. The result is that no buffer accumulates in the DSL modem, and packets get laid out onto the ATM wire with tiny gaps between them---this is what I do, and it basically works. With cable you don't know the conditions of the wire so this trick is impossible. Also, end users can only effectively schedule their upstream bandwidth, so ISPs need to somehow give you control of the downstream, *configurable* control through reflected upstream TOS/DSCP bits or something, to mark your filesharing traffic differently since obviously we can't trust them to do it.
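
    A sketch of the arithmetic behind that DSL trick (the per-packet framing overhead varies by ISP and encapsulation, so treat that constant as an assumption to adjust):

        import math

        # Rough sketch: how many bytes an IP packet really occupies on an
        # ATM-based DSL link. Shape below the sync rate using this figure,
        # and the modem's own buffer should stay empty.
        CELL, PAYLOAD = 53, 48        # ATM cell size / payload carried per cell
        AAL5_TRAILER = 8
        PER_PACKET_OVERHEAD = 10      # PPP/LLC framing -- ISP dependent, assumed here

        def wire_bytes(ip_packet_bytes):
            pdu = ip_packet_bytes + PER_PACKET_OVERHEAD + AAL5_TRAILER
            cells = math.ceil(pdu / PAYLOAD)   # AAL5 pads up to a whole cell
            return cells * CELL

        print(wire_bytes(1500))   # 1696 -- a 1500-byte packet costs ~13% extra on the wire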

    Buffer bloat infuriates me because it's blitheringly ignorant of implemented research more than a decade old and is allowing people to feel like they're doing something about the problem when really they're just swapping one bad constant for another. It's the wrong prescription. The fact that he's gotten this far shows our peer review process is broken.

  • by Envy Life ( 993972 ) on Wednesday March 28, 2012 @01:47PM (#39498535)

    There is a similar, well-known situation that comes up in database optimization. For example, the Oracle database has, over the years, optimized its internal disk cache based on its own LRU algorithms, and performance tuning involves a combination of finding the right cache size (there is a point where too much causes performance issues) and manually pinning objects to the cache. If the database is back-ended by a SAN with its own cache and LRU algorithms, you wind up with the same data needlessly cached in multiple places and performance statistics reported incorrectly.

    As a result I've run across recommendations from Oracle and other tuning experts to disable the SAN cache completely in favor of the database disk cache. That, or perhaps keep the SAN write cache and disable the read cache, because the fact is that Oracle knows better than the SAN the best way to cache data for the application. Add in caching at the application server level, which involves much of the same data, and we have the same information needlessly cached at many tiers.

    Then, of course, every vendor at every tier will tell you that you should keep their cache enabled, because caching is good and of course it doesn't conflict with other caching, but the reality is that caching is not 100% free... there is overhead to manage the LRU chains, do garbage collection, etc. So in the end you wind up dealing with a database buffer bloat issue very similar to Cringely's network buffer bloat. Let's not discount the fact that many server-disk communications are migrating toward the same sorts of protocols as networks (NAS, iSCSI, etc.). Buffer bloat is not a big deal at home or even on a mid-sized corporate intranet, but for super high speed communications like on-demand video, and mission-critical multi-terabyte databases, these things matter.
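
    A toy illustration of that duplication (nothing Oracle- or SAN-specific, just two LRU caches stacked on top of each other): the hot set ends up resident in both tiers, so the lower tier's memory buys very little.

        from collections import OrderedDict

        # Toy sketch: two stacked LRU caches (think "database cache over SAN
        # cache"). Misses in the upper tier fall through to the lower tier,
        # so the same hot blocks end up cached twice.
        class LRU:
            def __init__(self, size):
                self.size, self.d = size, OrderedDict()
            def get(self, key, loader):
                if key in self.d:
                    self.d.move_to_end(key)
                    return self.d[key]
                value = loader(key)
                self.d[key] = value
                if len(self.d) > self.size:
                    self.d.popitem(last=False)   # evict least recently used
                return value

        san = LRU(100)
        db = LRU(100)
        for block in [1, 2, 3, 1, 2, 3] * 10:
            db.get(block, lambda k: san.get(k, lambda k2: f"disk:{k2}"))

        print(sorted(set(db.d) & set(san.d)))    # [1, 2, 3]: the hot blocks are held in both tiers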

  • by jg ( 16880 ) on Wednesday March 28, 2012 @02:15PM (#39498867) Homepage

    You are correct that replacing one bad constant with another is a problem, though I certainly argue that many of our existing constants are egregiously bad and substituting a less bad one makes the problem less severe: that is what the cable industry is doing in a DOCSIS change that I hope starts to see the light of day later this year. That can take bloat in cable systems down by about an order of magnitude, from typically > 1 second to on the order of 100-200 ms; but that's not really good enough for VOIP to work as well as it should. The enemy of the good is the perfect: I'm certainly going to encourage obvious mitigations such as the DOCSIS changes while trying to encourage real long-term solutions, which involve both re-engineering of systems and algorithmic fixes. There are other places where similar "no brainer" changes can help the situation.

    I'm very aware of the research that is over a decade old, and of the fact that what exists is either *not available* where it is now needed (e.g. any of our broadband gear, our OSes, etc.) or *doesn't work* in today's network environment. I was very surprised to be told that even where AQM was available, it was often/usually not enabled, for reasons that are now pretty clear: classic RED and its derivatives (the most commonly available) require manual tuning and, if untuned, can hurt you. Like you, I had *thought* this was a *solved* problem in the 1990s; it isn't....

    RED and related algorithms are a dead end: see my blog entry on the topic, http://gettys.wordpress.com/2010/12/17/red-in-a-different-light/ , and in particular the "RED in a different light" paper referenced there (which was never formally published, for reasons I cover in the blog posting). So thinking we can just apply what we have today is *not correct*; when Van Jacobson (who, with Sally Floyd, originally designed RED) tells me RED won't hack it, I tend to believe him.... We have an unsolved research problem at the core of this headache.

    If you were tracking kernel changes, you'd see "interesting" recent patches to RED and other queueing mechanisms in Linux; the fact that bugs are still being found in these algorithms in this day and age shows just how little they have actually been used. In short, what we have had in Linux has often been broken, showing little active use.

    We have several problems here:
    1) basic mistakes in buffering, where semi-infinite statically sized buffers have been inserted in lots of hardware/software. BQL goes a long way toward addressing some of this in Linux (the device driver/ring buffer bufferbloat that is present in Linux and other operating systems).
    2) variable bandwidth is now commonplace, in both wireless and wired technologies. Ethernet scales from 10Mbps to 10 or 40Gbps.... Yet we've typically had static buffering, sized for the "worst case". So even stupid things like cutting the buffers proportionately to the bandwidth you are operating at can help a lot (similar to the DOCSIS change), though with BQL we're now in a better place than before (a quick illustration follows this list).
    3) the need for an AQM that actually *works* and never hurts you. RED's requirement for tuning is a fatal flaw, and we need an AQM that adapts dynamically over orders of magnitude of bandwidth *variation* on timescales of tens of milliseconds, a problem not present when RED was designed or when most of the AQM research of the 1990s was done. Wireless was a gleam in people's eyes in that era.
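
    A quick illustration of point 2 above (all numbers are illustrative): a transmit ring statically sized for gigabit operation turns into seconds of buffering if the same device ends up running at 10 Mbit/s.

        # Rough sketch: latency of a statically sized transmit ring at the
        # different link rates the same hardware might negotiate.
        RING_PACKETS, PKT_BYTES = 1000, 1500    # "worst case" sizing, assumed here

        def ring_delay_ms(link_bps):
            return RING_PACKETS * PKT_BYTES * 8 / link_bps * 1000.0

        for rate in (10e6, 100e6, 1e9, 10e9):
            print(f"{rate/1e6:>6.0f} Mbit/s -> {ring_delay_ms(rate):7.1f} ms of buffering")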

    I'm now aware of at least two different attempts at a fully adaptive AQM algorithm; I've seen simulation results for one of them, which look very promising. But simulations are ultimately only a guide (and sometimes a real source of insight): running code is the next step, along with comparison against existing AQMs in real systems. Neither of these AQMs has been published, though I'm hoping to see either or both published soon and their implementations happening immediately thereafter.

    So no, existing AQM algorithms won't hack it; the size of t

  • by Richard_J_N ( 631241 ) on Wednesday March 28, 2012 @03:03PM (#39499427)

    There is one other problem: TCP assumes that dropped packets mean the link is saturated, and backs off the transmit rate. But wireless isn't like that: frequently packets are lost because of noise (especially near the edge of the range). TCP responds by backing off (it thinks the link is congested) when actually it should be trying harder to overcome the noise. So we get really, really poor performance. (*)

    In this case, I think the kernel should somehow realise that there is "10 MB of bandwidth, with a 25% probability of packets returning". It should do forward-error correction, pre-emptively retransmitting every packet 4x as soon as it is sent. Of course there is a huge difference between the case of lots of users on the same wireless AP, all trying to share bandwidth (everyone needs to slow down), and 1 user competing with lots of background noise (the computer should be more aggressive). TCP flow-control seems unable to distinguish them.
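
    A back-of-the-envelope sketch of that blind-duplication tradeoff (reading the parent's "25% probability of packets returning" as a 75% per-packet loss rate, which is an interpretation; real FEC schemes are much smarter than sending straight copies):

        # Rough sketch: delivery probability vs. wasted airtime when every
        # packet is blindly sent k times over a very lossy link.
        loss = 0.75                          # assumed per-packet loss rate
        for copies in (1, 2, 3, 4):
            delivered = 1 - loss ** copies   # chance at least one copy arrives
            goodput = delivered / copies     # useful fraction of the airtime spent
            print(f"{copies} copies: {delivered:.1%} delivered, {goodput:.1%} of airtime useful")

    Duplication raises the odds of delivery but burns airtime, which is exactly why it only makes sense for the lone-user-in-noise case and not for a shared, congested AP.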

    (*) I've recently experienced this with wifi, where the connection was almost completely idle (I was the only one trying to use it), but where I was near the edge of the AP's range. The process of getting onto the network (with DHCP) was so slow that most times it failed: by the time DHCP got the final ACK, NetworkManager had seen a 30-second wait and brought the interface down! But if I could get DHCP to succeed, the network was usable (albeit very slow).
