Linux 3.3: Making a Dent In Bufferbloat?

mtaht writes "Has anyone, besides those who worked on byte queue limits and sfqred, had a chance to benchmark networking using these tools on the Linux 3.3 kernel in the real world? A dent, at least theoretically, seems to have been made in bufferbloat, and now that the new kernel and the new iproute2 are out, it should be easy to apply them in general (e.g. server/desktop) situations." Dear readers: Have any of you had problems with bufferbloat that were alleviated by the new kernel version?
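
For anyone who wants to try it, the sketch below shows one way to poke at both of the knobs the submitter names: capping BQL through its per-queue sysfs files, and installing SFQ with the per-flow RED options that landed in 3.3. This is a sketch, not the submitter's procedure; the interface name, the byte limit, and the RED parameter values are illustrative rather than recommendations, and everything here needs root.

    # Hedged sketch: cap BQL and enable SFQ with per-flow RED on Linux 3.3+.
    # "eth0" and all numeric values are placeholders, not tuned advice.
    import glob
    import subprocess

    IFACE = "eth0"  # hypothetical interface name

    # BQL (new in 3.3) sizes itself automatically; limit_max merely caps
    # how many bytes the driver may queue below the qdisc, per TX queue.
    for q in glob.glob(f"/sys/class/net/{IFACE}/queues/tx-*/byte_queue_limits"):
        with open(f"{q}/limit_max", "w") as f:
            f.write("30000")  # cap at ~30 kB; tune for your link speed

    # SFQ with the per-flow RED parameters added in 3.3 ("sfqred").
    subprocess.check_call([
        "tc", "qdisc", "replace", "dev", IFACE, "root",
        "sfq", "limit", "3000", "headdrop",
        "flows", "512", "divisor", "16384",
        "redflowlimit", "100000",
        "min", "8000", "max", "60000",
        "probability", "0.20", "ecn",
    ])

BQL largely tunes itself, so the limit_max write is only there to make experiments repeatable; the interesting part is watching latency under load before and after the qdisc change.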
  • 16550A (Score:5, Insightful)

    by Anonymous Coward on Wednesday March 28, 2012 @11:52AM (#39497293)

    In my day, if your modem had a 16550A UART protecting you with its mighty 16-byte FIFO buffer, you were a blessed man. That little thing let you potentially multitask. In OS/2, you could even format a floppy disk while downloading something, thanks to that sucker.

  • by Nutria ( 679911 ) on Wednesday March 28, 2012 @12:06PM (#39497455)

    ... routers and gateways to have any effect?

    I state the obvious because, this soon after release, who is installing it on anything but home routers?

  • Re:Hm (Score:4, Insightful)

    by Anonymous Coward on Wednesday March 28, 2012 @12:07PM (#39497469)

    Has there been widespread empirical analysis of bufferbloat?

    No - it is a meme started by one guy (he of X11 protocol fame), and there is still quite a sceptical audience.

    If your TCP flow-control packets are subject to QoS prioritisation (as they should be), then bufferbloat is pretty much moot.
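
    Sceptical or not, the effect is simple to measure: compare ping RTT on an idle link against the same ping while the uplink is saturated. A rough sketch of that test, with placeholder addresses standing in for your gateway and for a host willing to absorb traffic:

        # Hedged sketch: idle vs. loaded RTT. Both addresses are placeholders.
        import re
        import socket
        import subprocess
        import threading

        PING_TARGET = "192.0.2.1"   # placeholder: your gateway
        SINK = ("203.0.113.1", 9)   # placeholder: any host that absorbs bytes

        def mean_rtt_ms(count=10):
            """Run ping and pull the average out of its min/avg/max summary."""
            out = subprocess.run(["ping", "-c", str(count), PING_TARGET],
                                 capture_output=True, text=True).stdout
            m = re.search(r"= [\d.]+/([\d.]+)/", out)
            return float(m.group(1)) if m else None

        def saturate(stop):
            """Blast bytes at the sink until told to stop, filling the uplink queue."""
            s = socket.create_connection(SINK)
            s.settimeout(5)
            chunk = b"x" * 65536
            try:
                while not stop.is_set():
                    s.sendall(chunk)
            except OSError:
                pass
            finally:
                s.close()

        idle = mean_rtt_ms()
        stop = threading.Event()
        loader = threading.Thread(target=saturate, args=(stop,))
        loader.start()
        loaded = mean_rtt_ms()
        stop.set()
        loader.join()
        print(f"idle: {idle} ms, loaded: {loaded} ms")

    On a badly bloated link the loaded figure can climb from tens of milliseconds into whole seconds; on a well-managed queue the two numbers stay close.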

  • Re:16550A (Score:5, Insightful)

    by tibit ( 1762298 ) on Wednesday March 28, 2012 @12:15PM (#39497565)

    Floppy disk formatting requires very little CPU. You should have had no problem receiving bytes into a buffer, even at 57600 baud, using an 8250 UART (with its one-byte receive buffer), all the while formatting a floppy disk, even on the original IBM PC. You'd probably need to code it up yourself, of course. I don't recall the BIOS having any buffered UART support, nor do I recall many BIOS implementations being any good about not disabling interrupts.

    I wrote some 8250/16550 UART driver code, and everything worked fine as long as you didn't run stupid code that kept interrupts disabled, and as long as the interrupt priorities were set up right. In normal use, the only high-priority interrupts would be UART receive and floppy controller, and sound card if you had one. Everything else could wait with no ill effects.

  • Re:Hm (Score:5, Insightful)

    by nosh ( 213252 ) on Wednesday March 28, 2012 @12:17PM (#39497593)

    People might be sceptical about how big the problem is, but the analysis itself and the diagnosis are sound. Most people are only surprised they did not think of it before.

    The math is simple: if you have a buffer that is never empty, every packet has to wait in it. A buffer that is full all the time serves no purpose; it only delays every packet. And since RAM got so cheap, buffers in devices grew far faster than bandwidth, so you now often have buffers big enough that sending everything in them takes whole seconds. Such a buffer running in always-full mode means high latency for no gain.

    The additional effects (TCP going haywire when latency is too high, and no longer being able to work out the optimal send rate when no packets get dropped) only make the situation worse, but the basic point is simple: a buffer that is always full is a buffer with only downsides, and the bigger it is, the worse they get.
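
    The arithmetic behind that, as a two-line illustration (the buffer size and link rate are made-up but period-plausible numbers):

        # A standing queue of B bytes on a link of R bits/s delays every
        # packet by B*8/R seconds, regardless of how fast the hardware is.
        def standing_delay_s(buffer_bytes, link_bps):
            return buffer_bytes * 8 / link_bps

        # e.g. a 256 KiB device buffer feeding a 1 Mbit/s uplink:
        print(standing_delay_s(256 * 1024, 1_000_000))  # ~2.1 seconds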

  • by LikwidCirkel ( 1542097 ) on Wednesday March 28, 2012 @12:54PM (#39498041)
    It seems to me that people blame cheap memory, and the larger buffers it makes possible, for this problem, but no: if there is a problem, it's from bad programming.

    Buffering serves a purpose where, under unpredictable conditions, the rate at which data arrives can temporarily exceed the rate at which it can be sent. A proper event-driven system should always be draining the buffer whenever it holds data that can possibly be transmitted.

    Simply increasing the size of a buffer should absolutely not increase the time that data waits in that buffer.

    A large buffer serves to minimize potential dropped packets when there is a large burst of incoming data or the transmitter is slow for some reason.

    If a buffer actually adds delay to the system because it's always full beyond the ideal, one of two things is done totally wrong:
    a) Data is not being transmitted (draining the buffer) when it should be for some stupid reason.
    b) The characteristics of the data (average rate, burstiness, etc.) were not properly analyzed, and the system with the buffer does not meet its requirements for handling such data.

    In the end, it's about bad design and bad programming. It is not about "bigger buffers" slowing things down.
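
    Both positions are easy to check against a toy model. The simulation below (an illustration, not the poster's code) pushes a one-off burst and then a sustained overload through a link that drains one packet per tick, once with a small buffer and once with a large one:

        # Toy FIFO model: drops vs. worst queueing delay for two workloads.
        from collections import deque

        def run(arrivals, rate, capacity):
            """Drain `rate` packets per tick; return (drops, worst wait in ticks)."""
            q, drops, worst_wait = deque(), 0, 0
            for t, n in enumerate(arrivals):
                for _ in range(n):                    # enqueue this tick's arrivals
                    if len(q) < capacity:
                        q.append(t)                   # remember arrival tick
                    else:
                        drops += 1
                for _ in range(min(rate, len(q))):    # serve up to `rate` packets
                    worst_wait = max(worst_wait, t - q.popleft())
            return drops, worst_wait

        burst = [50] + [0] * 99    # one 50-packet burst, then silence
        overload = [2] * 100       # 2 packets/tick into a 1-packet/tick link
        for cap in (10, 1000):
            print(f"burst    cap={cap:4d} -> drops, worst wait: {run(burst, 1, cap)}")
            print(f"overload cap={cap:4d} -> drops, worst wait: {run(overload, 1, cap)}")

    For the burst, the large buffer eliminates drops at a modest delay cost, which is exactly the purpose described above. Under sustained overload, though, it eliminates drops only by converting them into a long standing delay, which is the always-full case the earlier comments describe.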
  • Re:16550A (Score:5, Insightful)

    by tibit ( 1762298 ) on Wednesday March 28, 2012 @02:54PM (#39499325)

    ...unless the serial data came in while the floppy interrupt handler was already in progress.

    Interrupt handlers are supposed to do a minimal amount of work and relegate the rest to what Linux parlance calls a bottom half. When writing the code, you time it and model the worst case -- for example, a floppy interrupt being raised "right before" the serial input becomes available. If there are cases where it may not work, you absolutely have to have workarounds: either you redo the floppy operation and lose some performance, or you suspend floppy access while data is coming in, etc. There's no handwaving allowed, in spite of a lot of software being designed just that way.
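
    The shape of that split, as a stand-in sketch (Python threads in place of a real interrupt context, since only the structure matters here):

        # Hedged sketch of the top-half/bottom-half split described above.
        import queue
        import threading

        rx = queue.Queue(maxsize=4096)   # stand-in for the driver's ring buffer

        def uart_isr(byte):
            """Top half: minimal work, never blocks."""
            try:
                rx.put_nowait(byte)
            except queue.Full:
                pass  # a real driver would count this as an overrun

        def process(byte):
            pass  # placeholder for the slow protocol/parsing work

        def bottom_half():
            """All slow work happens here, outside interrupt context."""
            while True:
                process(rx.get())

        threading.Thread(target=bottom_half, daemon=True).start()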

"Engineering without management is art." -- Jeff Johnson

Working...