Got (Buffer) Bloat?

mtaht writes, "After a very intense month of development, the Bufferbloat project has announced the debloat-testing git kernel tree, featuring (at Van Jacobson's suggestion) a wireless network latency-smashing algorithm called eBDP, the SFB and CHOKe packet schedulers, and a slew of driver-level fixes that reduce network latency across the Linux kernel by over two orders of magnitude. Got Bloat?"
  • Re:what it is (Score:5, Informative)

    by firstnevyn ( 97192 ) on Saturday February 26, 2011 @06:55AM (#35322612)

    It's about the downside of memory becoming cheap: it causes latency problems for congestion control mechanisms that rely on the endpoints being able to inform the sender when it's sending too fast.

    Jim Gettys's research blog entry [] explains the problem in detail.

  • What bufferbloat is (Score:5, Informative)

    by Sits ( 117492 ) on Saturday February 26, 2011 @06:59AM (#35322636) Homepage Journal

    My understanding may not be correct but:

    Bufferbloat (I first came across the term bufferbloat in this blog post by Jim Gettys []) is the nickname that has been given to the high latency that can occur in modern network connections due to large buffers in the network. An example could be the way that a network game on one computer starts to stutter if another computer starts to use a protocol like bittorrent to transfer files on the same network connection.

    The large buffers seem to have arisen from a desire to maximise download throughput regardless of network condition. This can give rise to the situation where small urgent packets are delayed because big packets (which perhaps should not have been sent) are queued up in front of them. The system sending the big packets is not told to stop sending them so quickly because its packets are being delivered...

    The linked article sounds like people have modified the Linux kernel source so that those who know how to compile their own kernels can test ideas for reducing the bufferbloat effect on their hardware and report back their results.

    Does this help explain things a bit?
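To put rough numbers on that stutter, here's a back-of-the-envelope sketch (mine, with made-up but plausible values, not anything from the kernel tree): a small game packet arriving behind a FIFO full of bulk packets has to wait for the entire backlog to drain first.

```python
# Toy illustration: how long a small "urgent" packet waits behind bulk
# traffic already sitting in a FIFO link buffer. Values are hypothetical.
LINK_RATE = 1_000_000 / 8   # 1 Mbit/s uplink, in bytes per second
BULK_PKT = 1500             # bytes, a full-size bulk-transfer packet

def queue_delay_ms(backlog_bytes, link_rate=LINK_RATE):
    """Milliseconds a newly arrived packet waits for the queued backlog."""
    return backlog_bytes / link_rate * 1000

# With 200 bulk packets already buffered, the game packet waits:
print(f"{queue_delay_ms(200 * BULK_PKT):.0f} ms")  # 2400 ms
```

Nearly two and a half seconds of queueing delay on a 1 Mbit/s uplink from a mere 300 KB of buffered data - which is why a torrent next door can wreck a game.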

  • by Sits ( 117492 ) on Saturday February 26, 2011 @07:23AM (#35322688) Homepage Journal

    I should have also linked to a definition of bufferbloat by Jim Gettys []. For the curious here's a page of links to bufferbloat resources [] and a 5 minute animation that shows the impact of large buffers on network communication (.avi) [].

  • Re:what it is (Score:5, Informative)

    by shish ( 588640 ) on Saturday February 26, 2011 @07:24AM (#35322692) Homepage
    Most traffic throttling algorithms are based on the idea that the router will say "hey, slow down" if a client overloads it -- but when the router has lots of RAM, there is a tendency for it to just keep accepting and accepting, with the client happily pushing data at full speed, while the router is queuing up the data and only moving it upstream very slowly. Because the queues end up being huge, traffic going through that router gets lagged.
  • by Lennie ( 16154 ) on Saturday February 26, 2011 @07:30AM (#35322704)

    The code changes to the Linux kernel also reduce the size and ill effects of buffers inside the kernel and drivers.

  • Re:what it is (Score:4, Informative)

    by drinkypoo ( 153816 ) on Saturday February 26, 2011 @08:23AM (#35322850) Homepage Journal

    Oh come on, you only have to follow four links to get to the definition []. What are you, lazy?

    Seriously, this is a true failure of web design. You click from the summary, then you go to the wiki, then you go to the faq, and the faq doesn't even tell you, it references a blog post.

  • Re:what it is (Score:3, Informative)

    by Anonymous Coward on Saturday February 26, 2011 @09:16AM (#35323006)

    oblig. car analogy, by Eric Raymond no less: []

    == Packets on the Highway ==

    To fix bufferbloat, you first have to understand it. Start by
    imagining cars traveling down an imaginary road. They're trying to get
    from one end to the other as fast as possible, so they travel nearly
    bumper to bumper at the road's highest safe speed.

  • by mtaht ( 603670 ) on Saturday February 26, 2011 @10:52AM (#35323412) Homepage
    The core work, where we saw latency under load drop by two orders of magnitude, was in the wireless driver stack on Linux. Examples were the iwl driver (130 ms to ~2 ms) and the ath9k driver (>200 ms to ~2 ms), and these numbers were for GOOD connections at high wifi rates. You can get three orders of magnitude of improvement if you are on a slow wifi connection. There's a new rate-sensitive algorithm for wireless (eBDP) that we are trying in this kernel tree. It's not fully baked yet; 802.11n wireless packet aggregation is HARD. That said, there's bloat in all the other wired drivers too. We are doing far too much uncontrolled buffering in the kernel - specifically the DMA tx ring on many devices - for slower networks. As one example, a GigE interface connected to a 3 Mbit cable modem does bad, subtle things to the stack.
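To put numbers on the GigE-to-cable-modem example (my arithmetic, not the project's code): a fixed-size tx ring buffers the same packet count regardless of link rate, while sizing to the bandwidth-delay product would scale with the link.

```python
# Back-of-the-envelope sketch; the ring size and rates are illustrative.
def ring_delay_ms(ring_pkts, pkt_bytes, rate_bps):
    """Worst-case drain time of a full tx ring at a given link rate."""
    return ring_pkts * pkt_bytes * 8 / rate_bps * 1000

def bdp_bytes(rate_bps, target_delay_s):
    """Buffer that holds roughly one target-delay's worth of data."""
    return rate_bps / 8 * target_delay_s

# A hypothetical 256-descriptor ring of 1500-byte packets draining at 3 Mbit/s:
print(f"{ring_delay_ms(256, 1500, 3_000_000):.0f} ms")  # over a second of queue
# What a 20 ms latency target actually allows at 3 Mbit/s:
print(f"{bdp_bytes(3_000_000, 0.020):.0f} bytes")       # only a handful of packets
```

The same ring behind a true gigabit link drains a thousand times faster, which is why a one-size-fits-all ring depth hurts slow links so badly.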
  • by Predius ( 560344 ) on Saturday February 26, 2011 @11:32AM (#35323660)

    I think what you were seeing was more due to ATM overhead than to the DSLAM trying to be cute with throttling. Because ADSL encapsulates everything in ATM, even small IP / Ethernet frames get broken up into lots of ATM cells, which can add upwards of 20% overhead. So an ADSL line trained at 8Mb/s will never provide 8Mb/s of usable throughput to the end user. Some ISPs actually advertise targeted throughput instead of train rate and set the train rate a certain percentage above the target throughput to compensate. Others just advertise train rates and have disclaimers in the fine print.

    I've had my hands inside most Gen 1, 1.5 and 2nd Gen DSLAMs and never seen any with automatic throttling like you described.

    (Gen 1 being units that just function at the ATM layer requiring an external system to bridge to Ethernet or IP. Gen 1.5's being upgraded Gen 1s with crude bridging, and Gen 2 being units that were designed to terminate connections directly from the ground up.)
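The ATM cell math behind that "upwards of 20%" figure can be sketched like this. This is my own simplified AAL5 model (8-byte trailer, frames padded into 53-byte cells carrying 48 payload bytes each); real lines add LLC/SNAP or PPP encapsulation headers on top, which cost even more.

```python
import math

def atm_wire_bytes(frame_bytes):
    """Bytes on the wire for one frame carried over AAL5 (simplified)."""
    cells = math.ceil((frame_bytes + 8) / 48)  # +8 for the AAL5 trailer
    return cells * 53                          # 48 payload + 5 header per cell

for frame in (64, 576, 1500):
    wire = atm_wire_bytes(frame)
    print(f"{frame}B frame -> {wire}B on the wire "
          f"({(wire - frame) / frame:.0%} overhead)")
```

A 576-byte frame comes out right around 20% overhead, and tiny frames fare far worse, so the usable rate is always well below the train rate.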

  • Re:Latency again (Score:4, Informative)

    by rcpitt ( 711863 ) on Saturday February 26, 2011 @02:11PM (#35324766) Homepage Journal
    I deal with streaming video daily - from a producer, distributor and support point of view.

    "Why does the web site load so slowly?" is the classic question - caused in many cases by the "eagleholic" having 4 live eagle nest video streams running in one window while trying to post observations and screencaps to the web site in another.

    Believe me - there is ample reason to deal with the problem, as most of today's home networks are used for more than just one thing at a time. Mom is watching video, sis is uploading pictures of her party to Facebook, son is playing online games, and dad is trying to listen to streaming audio - and NOTHING is working correctly, despite the fact that this is a trivial load for even a T1 (1.544Mbps), let alone today's high-speed cable (30Mbps down and 5Mbps up). We used to run 30+ modems and web sites and email and all manner of stuff over bonded 56K ISDN lines, for pity's sake - and we got better latency than the links today.

    What's the problem? The latency for the "twitch" game packets has gone from 10ms to 4000ms or more - and the isochronous audio stream is jerky because it's bandwidth-starved, and the upload takes forever because the ACKs from FB can't get through the incoming video dump from YouTube (with its fast start window pushed from the default of 3 to 11 or 12), and by the time the video is half over, the link to YouTube has dropped because it took 30 seconds or more for the buffer to drain after the first push and the link had timed out.

    That's the problem - you need low latency for some things at the same time you need high throughput for others. It is possible, and can be done - and IS done, if things are tuned correctly. But correctly tuning the use of buffers is an art today, not a science - and the ever-changing (by 3-4 orders of magnitude) needs of today's end-point routers have pushed the limits of the AQM (active queue management) algorithms currently available, even if they're turned on (which in most cases, it seems, they're not).
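As a rough aside on the "fast start window" mentioned above: that's TCP's initial congestion window (initcwnd), which caps the burst a server may send before the first ACK returns. A quick sketch of the sizes involved, assuming a typical 1460-byte MSS:

```python
MSS = 1460  # typical Ethernet TCP segment payload, in bytes

def first_burst_bytes(initcwnd, mss=MSS):
    """Data a sender may inject before the first ACK comes back."""
    return initcwnd * mss

for cwnd in (3, 10, 12):
    print(f"initcwnd {cwnd}: {first_burst_bytes(cwnd)} bytes in the first RTT")
```

Raising the window from 3 to 12 quadruples the unsolicited burst every new connection fires into the path - and those bursts land squarely in the bloated buffers being discussed.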

  • Re:what it is (Score:4, Informative)

    by mtaht ( 603670 ) on Saturday February 26, 2011 @05:58PM (#35326380) Homepage

    re:"Interesting problem for dd-wrt"

    We are throwing effort at both the mainline kernel and OpenWrt.
    OpenWrt is foundational for dd-wrt and several other (commercial) distributions of Linux on the router. I have a large set of debloated routers already; I'm just awaiting further work on the eBDP algorithm to make them better.... []

    re: "using pings"
    httping is a much saner approach than ping in many cases. Get it from: []
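For anyone without httping installed, the idea (timing a full HTTP GET, so the probe shares fate with real TCP traffic through the same queues) can be sketched in a few lines of Python. This is a stand-in for illustration, not the actual tool:

```python
# Minimal httping-style probe: measure full HTTP GET round trips.
import time
import urllib.request

def http_ping(url, count=3):
    """Return per-request round-trip times, in milliseconds."""
    rtts = []
    for _ in range(count):
        t0 = time.monotonic()
        with urllib.request.urlopen(url) as resp:
            resp.read()  # pull the whole body so the transfer completes
        rtts.append((time.monotonic() - t0) * 1000)
    return rtts
```

Unlike an ICMP ping, each sample includes the TCP handshake and server response time - which is the point, since that's the path your real traffic takes through any bloated buffers.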

    re: RED & AQM

    SFB and CHOKe are in the debloat-testing kernel, as is eBDP.
    RED 93 isn't going to work. nRED may. Experimentation and scripts highly desired. See the bloat and bloat-devel mailing lists for discussions.

    Also: []
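For readers who haven't met it: "RED 93" above is classic Random Early Detection (Floyd and Jacobson, 1993), which drops or marks packets with a probability that ramps up with the averaged queue length. A toy sketch of that ramp, with illustrative parameter values of my own choosing:

```python
def red_drop_prob(avg_q, min_th=5.0, max_th=15.0, max_p=0.02):
    """Classic RED: drop probability as a function of the EWMA queue length."""
    if avg_q < min_th:
        return 0.0   # queue short enough: never drop
    if avg_q >= max_th:
        return 1.0   # queue too long: drop everything
    # linear ramp from 0 up to max_p between the two thresholds
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

One commonly cited difficulty with classic RED is that min_th, max_th, and max_p must be hand-tuned to each link's rate and load, which is part of why untuned deployments tend to disappoint.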


    I've seen some VERY interesting behavior with TCP Vegas over bloated connections. []

  • Corrected Git URL (Score:4, Informative)

    by JamieF ( 16832 ) on Sunday February 27, 2011 @12:42AM (#35328518) Homepage

    The link in the /. story to debloat-testing should go here: git://git.infradead.org/debloat-testing.git

    git:gitinfradeadorgdebloat-testinggit is not a valid URL.
