
Linux 3.3: Making a Dent In Bufferbloat?

mtaht writes "Has anyone, besides those who worked on byte queue limits and sfqred, had a chance to benchmark networking with these tools on the Linux 3.3 kernel in the real world? A dent, at least theoretically, seems to have been made in bufferbloat, and now that the new kernel and the new iproute2 are out, it should be easy to apply them in general (e.g. server/desktop) situations." Dear readers: Have any of you had problems with bufferbloat that were alleviated by the new kernel version?
This discussion has been archived. No new comments can be posted.

  • What is bufferbloat? (Score:5, Informative)

    by stillnotelf ( 1476907 ) on Wednesday March 28, 2012 @11:44AM (#39497199)
    TFS doesn't mention, and it's hardly an obvious term. From TFA:

    Bufferbloat...is the result of our misguided attempt to protect streaming applications (now 80 percent of Internet packets) by putting large memory buffers in modems, routers, network cards, and applications. These cascading buffers interfere with each other and with the flow control built into TCP from the very beginning, ultimately breaking that flow control, making things far worse than they’d be if all those buffers simply didn’t exist.

  • Re:16550A (Score:5, Informative)

    by Trixter ( 9555 ) on Wednesday March 28, 2012 @12:37PM (#39497827) Homepage

    Floppy disk formatting requires very little CPU time. You should have had no problem receiving bytes, even at 57600 baud, into a buffer using an 8250 UART (with its one-byte receive buffer), all while formatting a floppy disk, even on the original IBM PC.

    ...unless the serial data came in while the floppy interrupt handler was already in progress. In such a situation, the serial handler must wait until the floppy handler is finished, and depending on what the floppy handler is doing, it could take long enough that more serial data would be delayed or lost. And for those of us who tried to do things like download files directly to floppy disks on slower PCs in the 1980s, this was a regular occurrence.

    The 16550A UART's 16-byte buffer meant that several bytes could come in before the serial interrupt needed to be handled again, allowing serial communications to run at full speed for longer time periods before needing to be emptied. This made a world of difference working on slower machines writing to floppies (and faster machines trying to download something in the background while in a multitasking environment).
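
    To put rough numbers on that, here is a toy simulation (assumed timings, not real 8250/16550 driver code) of bytes arriving at 57600 baud while another interrupt handler blocks servicing for 2 ms: the 1-byte receive register overruns, while a 16-byte FIFO rides it out.

        # Toy model with assumed numbers (not real 8250/16550 driver code):
        # bytes arrive at a fixed interval; the CPU can only drain the UART
        # when interrupts are not blocked by another handler.

        BYTE_INTERVAL_US = 1_000_000 / (57600 / 10)  # ~174 us per byte at 57600 baud, 8N1
        BLOCKED_UNTIL_US = 2000                      # floppy handler holds off the serial IRQ for 2 ms

        def lost_bytes(fifo_depth, total_bytes=100):
            fifo = 0
            lost = 0
            for i in range(total_bytes):
                t = i * BYTE_INTERVAL_US
                if t >= BLOCKED_UNTIL_US:
                    fifo = 0          # serial IRQ runs and drains the FIFO
                if fifo < fifo_depth:
                    fifo += 1         # the incoming byte fits in the receive FIFO
                else:
                    lost += 1         # overrun: the byte is gone
            return lost

        print("1-byte buffer (8250):  bytes lost =", lost_bytes(1))   # several overruns
        print("16-byte FIFO (16550A): bytes lost =", lost_bytes(16))  # none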

  • by tepples ( 727027 ) <tepples.gmail@com> on Wednesday March 28, 2012 @12:57PM (#39498085) Homepage Journal

    the alternative is data loss

    TCP was designed to work around this by putting predictably sized retransmit buffers on the endpoints, and then the endpoints would scale their transmission rate based on the rate of packet loss that the host on the other end reports. Bufferbloat happens when unpredictably sized buffers in the network interfere with this automatic rate control.
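
    As a rough illustration of that rate control, here is a toy additive-increase/multiplicative-decrease loop (a sketch of the idea only, not actual TCP code): the sender keeps probing for more bandwidth and cuts its window in half whenever loss is reported, so its rate converges on what the path can carry.

        # Toy additive-increase / multiplicative-decrease (AIMD) loop.
        # Illustrative only: real TCP congestion control (slow start, fast
        # retransmit, RTO calculation, etc.) is considerably more involved.

        def aimd(loss_per_rtt, cwnd=1.0):
            """loss_per_rtt: one boolean per round trip (True = loss reported)."""
            history = []
            for lost in loss_per_rtt:
                if lost:
                    cwnd = max(1.0, cwnd / 2)   # back off hard when loss is reported
                else:
                    cwnd += 1.0                 # otherwise keep probing for more bandwidth
                history.append(cwnd)
            return history

        # A loss every 8th round trip produces the familiar sawtooth around capacity.
        print(aimd([(i % 8) == 7 for i in range(24)]))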

  • by Richard_J_N ( 631241 ) on Wednesday March 28, 2012 @02:03PM (#39498719)

    Unfortunately, I think you haven't quite got this right.

    The problem isn't buffering at the *ends* of the link (the two applications talking to one another), rather, it's buffering in the middle of the link.

    TCP flow control works by getting (timely notification of) dropped packets when the network begins to saturate. Once the network reaches about 95% of full capacity, it's important to drop some packets so that *all* users of the link back off and slow down a bit.

    The easiest way to imagine this is by considering a group of people all setting off in cars along a particular journey. Not all roads have the same capacity, and perhaps there is a narrow bridge part way along.
    So the road designer thinks: that bridge is a choke point, but the flow isn't perfectly smooth. So I'll build a car-park just before the bridge: then we can receive inbound traffic as fast as it can arrive, and always run the bridge at maximum flow. (The same thing happens elsewhere: we get lots of carparks acting as stop-start FIFO buffers).

    What now happens is that everybody ends up sitting in a car-park every single time they hit a buffer. It makes the end-to-end latency much much larger.

    What should happen (and TCP flow-control will autodetect if it gets dropped packet notifications promptly) is that people know that the bridge is saturated, and fewer people set off on their journey every hour. The link never saturates, buffers don't fill, and nobody has to wait.

    Bufferbloat is exactly like this: we try to be greedy and squeeze every last baud out of a connection, and what happens is that latency goes far too high and we ultimately waste bandwidth on retransmits (because some packets arrive so late that they are given up for lost). So we end up much, much worse off.
    A side consequence is that the traffic jams can sometimes oscillate wildly and unpredictably.

    If you've ever seen your mobile phone take 15 seconds to make a simple request for a search result, despite having a good signal, you've observed buffer bloat.

  • Re:Hm (Score:5, Informative)

    by RulerOf ( 975607 ) on Wednesday March 28, 2012 @02:12PM (#39498833)

    Why would my buffer never be empty? Just because you have more buffers doesn't mean you process anything slower than you have to. It just means that if you can't get around to something immediately, you can catch up later.

    That's exactly the problem. TCP relies on packets being dropped in order to manage connections. When buffers are instead allowed to fill up, delaying packets instead of outright dropping them, the application relying on those packets experiences extremely high latency instead of being rate-limited to fit inside of the available bandwidth.

    The problem has come to pass because of how counterintuitive this really is. It's a GOOD THING to discard data you can't transfer RIGHT NOW, rather than wait around and send it later.

    I suppose one of the only analogs I can think of might be the SR-71 Blackbird. It leaks fuel like a sieve on the ground, spitting it all over the place, because its panels only seal once the airframe expands at speed and altitude; if it didn't leak on the runway, it would burst in flight. Using gigantic packet buffers would be like "fixing" a Blackbird so that it didn't leak on the runway.

  • Re:Hm (Score:4, Informative)

    by Anonymous Coward on Wednesday March 28, 2012 @03:07PM (#39499473)

    Let's set TCP aside at first. Assume you have one slowest link, like the cable from your house to the internet, and that both the internet and the network inside your house can instantly handle any load that cable can generate.

    Now take the case where one person in the house (or one of that person's programs) has a long download running (several hours, say). If nothing else is happening, you want to use the full cable. Assume further that the sending server somehow gets the speed exactly right, so it sends precisely as much as the cable to your house can handle. So far, so good.

    Now some other program or someone else browses a few sites or downloads a small file that would take one second over the cable (if it were the only traffic). That adds traffic, so you now get more data than your cable can handle. Your ISP has a buffer on its side of the cable, so the extra packets end up in the buffer. But the side you are downloading from still sends at the full rate of the cable, so if the buffer in front of the cable is big enough, you now have exactly one second's worth of packets sitting in it. The download keeps running, nothing else is running, and the buffer holds exactly one second of data from then on. Everything still arrives, but everything arrives one second later. The buffer gives no advantage here; everything is still delayed by a full second. If you had dropped that one second of data instead, the server would have had to retransmit it, arriving about a second late anyway. So without the buffer, essentially everything would still arrive at the same time, but any other requests going over the same line would get through immediately rather than a full second later.

    In practice nothing sends exactly the amount of data your cable can handle, but senders try (everything, of course, tries to send as fast as it can). If they can somehow measure the congestion and slow down so the buffer drains again, the buffer makes sense. If they can keep producing enough data that the buffer never empties, your buffer is too big and is only creating problems.

    For bigger problems, now bring in TCP (or anything else that wants to add reliability to the internet). Some packets may get lost, so lost packets have to be retransmitted. To do this you wait a while for a packet, and if it does not arrive you ask for it to be resent. You cannot wait very long, because users do not tolerate long waits in anything interactive. So if a packet arrives late enough, the computer will already have asked for it to be resent. If your buffers are big enough to hold, say, a whole second of data, and there are buffers in both directions, they may already contain multiple requests to resend the same packet, and therefore multiple copies of that packet going out (remember, a resend request can get lost too). So while adequately sized buffers avoid needless retransmits, oversized buffers cause them.

    Now add the specifics of TCP. TCP handles "send as fast as possible" by speeding up until it is going too fast (detected by too many packets getting lost) and then slowing down again. If there are buffers around that can absorb a large amount of data, one side can send too fast for quite a long time while everything still arrives (sometimes a bit late, but it arrives). So it gets faster and faster and faster, and latency keeps growing. Eventually the buffer fills and packets have to be dropped. But you usually drop the newest arrivals, so once that moment comes there can still be a long delay before the other side realizes anything is missing (say a whole second, which is half an eternity for a positronic android ^H^H^H^H^H a computer), and the sending side was speeding up the whole time. Now all the TCP connections running over that buffer collapse at once, all of them dropping back to a very slow speed, and you lose far more packets (some of them multiple times) than if you had no buffer at all.
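
    The "one second of packets" above follows directly from buffer size divided by link rate. A back-of-the-envelope sketch, using assumed numbers (a 1 Mbit/s uplink and a few common buffer sizes):

        # Queueing delay added by a standing queue: delay = buffered_bits / link_rate.
        # The buffer sizes and the 1 Mbit/s uplink below are assumptions for
        # illustration, not measurements.

        def queue_delay_ms(buffer_bytes, link_bits_per_sec):
            return buffer_bytes * 8 / link_bits_per_sec * 1000

        uplink_bps = 1_000_000  # assumed 1 Mbit/s cable uplink
        for buf_kb in (32, 128, 512):
            delay = queue_delay_ms(buf_kb * 1024, uplink_bps)
            print(f"{buf_kb:4d} KB buffer -> ~{delay:5.0f} ms of extra latency when full")

        # Typical initial TCP retransmission timeouts are on the order of a second,
        # so the larger buffers above can delay packets long enough to trigger
        # resends even though nothing was actually lost.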

  • by nweaver ( 113078 ) on Wednesday March 28, 2012 @03:40PM (#39499911) Homepage

    Buffer bloat infuriates me because it's blitheringly ignorant of implemented research more than a decade old and is allowing people to feel like they're doing something about the problem when really they're just swapping one bad constant for another. It's the wrong prescription. The fact he's gotten this far shows our peer review process is broken.

    Actually, this focus is driven very much by a technical approach. We know it is a problem in the real world from widespread empirical measurements. Basically, for most users the Internet can't "walk and chew gum": interactive tasks or bulk data transfers each work just fine on their own, but combining bulk transfer with interactive activity results in a needless world of hurt.

    And the proper solution is to utilize the solutions known in the research community for a decade plus, but the problem is getting AQM deployed to the millions of possible existing bottlenecks, or using 'ugly-hack' approaches like RAQM where you divorce the point of control from the buffer itself.

    Heck, even a simple change to FIFO design: "drop incoming packets when the oldest packet in the queue is >X ms old" [1], that is, sizing buffers in delay rather than capacity, is effectively good enough for most purposes. I'd rather have a good AQM algorithm in my cable modem, but without that a simple buffer sized in delay gets us 90% of the way there.

    [1] X should be "measured RTT to the remote server", but in a pinch a 100-200ms number will do in most cases.
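
    For what it's worth, here is a sketch of that delay-bounded FIFO (hypothetical code, not any shipping qdisc): an arriving packet is dropped whenever the packet at the head of the queue has been waiting longer than X ms, so standing latency stays bounded regardless of the link rate.

        # Hypothetical delay-bounded FIFO (not any particular kernel qdisc):
        # drop an arriving packet whenever the oldest queued packet has already
        # waited longer than the delay budget. This bounds queueing delay
        # instead of bounding queue size in bytes or packets.

        import collections
        import time

        class DelayBoundedFifo:
            def __init__(self, max_delay_ms=100):    # X ms; ideally close to the path RTT (assumed value)
                self.max_delay = max_delay_ms / 1000.0
                self.queue = collections.deque()     # entries: (enqueue_time, packet)

            def enqueue(self, packet, now=None):
                now = time.monotonic() if now is None else now
                if self.queue and (now - self.queue[0][0]) > self.max_delay:
                    return False                     # head is too old: drop the arrival
                self.queue.append((now, packet))
                return True

            def dequeue(self):
                return self.queue.popleft()[1] if self.queue else None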

