Linux 3.3: Making a Dent In Bufferbloat? 105
mtaht writes "Has anyone, besides those who worked on byte queue limits and sfqred, had a chance to benchmark networking using these tools on the Linux 3.3 kernel in the real world? A dent, at least theoretically, seems to have been made in bufferbloat, and now that the new kernel and the new iproute2 are out, it should be easy to apply them in general (e.g. server/desktop) situations."
Dear readers: Have any of you had problems with bufferbloat that were alleviated by the new kernel version?
Yes. (Score:4, Funny)
You name it, it's become bloated: buffers, bellies, butts, pretty much everything.
Re: (Score:3)
Everything gets fatter, hairier, and closer to the ground....
Re: (Score:3)
Everything gets fatter, hairier, and closer to the ground....
The top of my head is less hairy, and about the same width and distance from the ground as it was 20 years ago ...
Re: (Score:3)
Fill your main input buffer with lots of vegetables and you may see the other buffers shrink. :-)
What is bufferbloat? (Score:5, Informative)
Bufferbloat...is the result of our misguided attempt to protect streaming applications (now 80 percent of Internet packets) by putting large memory buffers in modems, routers, network cards, and applications. These cascading buffers interfere with each other and with the flow control built into TCP from the very beginning, ultimately breaking that flow control, making things far worse than they’d be if all those buffers simply didn’t exist.
16550A (Score:5, Insightful)
In my day, if your modem had a 16550A UART protecting you with its mighty 16-byte FIFO buffer, you were a blessed man. That little thing let you potentially multitask. In OS/2, you could even format a floppy disk while downloading something, thanks to that sucker.
Re:16550A (Score:5, Insightful)
Floppy disk formatting requires very little CPU time. You should have had no problem receiving bytes even at a 57600 baud rate into a buffer using an 8250 UART (with its one-byte receive buffer), all while formatting a floppy disk, even on the original IBM PC. You'd probably need to code it up yourself, of course. I don't recall the BIOS having any buffered UART support, nor do I recall many BIOS implementations being any good about not disabling interrupts.
I wrote some 8250/16550 UART driver code, and everything worked fine as long as you didn't run stupid code that kept interrupts disabled, and as long as the interrupt priorities were set up right. In normal use, the only high-priority interrupts would be UART receive, the floppy controller, and the sound card if you had one. Everything else could wait with no ill effects.
Re:16550A (Score:5, Informative)
Floppy disk formatting requires very little CPU time. You should have had no problem receiving bytes even at a 57600 baud rate into a buffer using an 8250 UART (with its one-byte receive buffer), all while formatting a floppy disk, even on the original IBM PC.
...unless the serial data came in while the floppy interrupt handler was already in progress. In such a situation, the serial handler must wait until the floppy handler is finished, and depending on what the floppy handler is doing, it could take long enough that more serial data would be delayed or lost. And for those of us who tried to do things like download files directly to floppy disks on slower PCs in the 1980s, this was a regular occurrence.
The 16550A UART's 16-byte buffer meant that several bytes could come in before the serial interrupt needed to be handled again, allowing serial communications to run at full speed for longer periods before the buffer needed emptying. This made a world of difference on slower machines writing to floppies (and on faster machines trying to download something in the background in a multitasking environment).
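For the curious, here is roughly what a receive handler for a 16550-class UART looks like. The register offsets (RBR at 0, LSR at 5) and the data-ready bit follow the standard 16550 layout; the port-I/O helper and the ring-buffer hand-off are placeholders invented for this sketch.

    /* Sketch of a 16550-style RX interrupt handler (not from any real driver). */
    #include <stdint.h>

    #define UART_BASE 0x3F8            /* COM1 on a classic PC (assumption) */
    #define UART_RBR  (UART_BASE + 0)  /* receive buffer register */
    #define UART_LSR  (UART_BASE + 5)  /* line status register */
    #define LSR_DATA_READY 0x01

    extern uint8_t inb(uint16_t port);        /* hypothetical port-I/O helper */
    extern void rx_ring_put(uint8_t byte);    /* hand-off to the "bottom half" */

    void uart_rx_isr(void)
    {
        /* With a 16-byte FIFO, several bytes may be waiting: drain them all
         * in one interrupt instead of taking one interrupt per byte. */
        while (inb(UART_LSR) & LSR_DATA_READY)
            rx_ring_put(inb(UART_RBR));
        /* Heavy work (protocol handling, writing to disk) is deferred so the
         * floppy interrupt isn't held off for long. */
    }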
Re:16550A (Score:5, Insightful)
...unless the serial data came in while the floppy interrupt handler was already in progress.
The interrupt handler is supposed to do a minimal amount of work and relegate the rest to something called a bottom half (in Linux parlance). When writing the code, you time it and model the worst case -- for example, a floppy interrupt being raised "right before" the serial input becomes available. If there are cases where it may not work, you absolutely have to have workarounds: either you redo the floppy operation and lose some performance, or you suspend floppy access while data is coming in, etc. There's no handwaving allowed, in spite of a lot of software being designed just that way.
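A quick back-of-the-envelope version of that worst-case budget, assuming 8N1 framing (10 bits per byte on the wire):

    #include <stdio.h>

    int main(void)
    {
        double baud = 57600.0;
        double bits_per_byte = 10.0;   /* start + 8 data + stop (8N1) */
        double us_per_byte = 1e6 * bits_per_byte / baud;

        printf("one byte every %.0f us\n", us_per_byte);              /* ~174 us */
        printf("8250 (1-byte buffer): other ISRs must yield within %.0f us\n",
               us_per_byte);
        printf("16550A (16-byte FIFO): roughly %.1f ms of slack\n",
               16 * us_per_byte / 1000.0);                            /* ~2.8 ms */
        return 0;
    }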
Re: (Score:2)
Not when you are talking to a PD765 or Intel 82072/7, like you would on a PC. Those run their own microcode. You prepare a DMA buffer with sector IDs for each sector in the track, then you fire off a command, and the controller does its thing in the background. It will interrupt when the track has already been formatted. When formatting, the fastest you can go is two revolutions: one for the format, another one for the seek. Those floppy controllers don't allow you to start formatting mid-track IIRC, although
Re: (Score:1)
Unless you use the amazing 2MGUI, which lets you cram more space onto floppy disks, formatting HD 3.5-inch floppies to over two megabytes. It drives the floppy directly somehow, chewing up CPU time - which means it works under DOS and WinDOS only.
(There are other programs, such as the companion 2M, which don't require CPU time, but 2MGUI is the record holder for highest capacity.)
Re: (Score:2)
There is no such thing as driving a floppy "directly somehow". All it can do is put bytes into a DMA buffer, then put bytes into the command register area, and let the controller do its thing. 2M was written for DOS; it needs to talk to the floppy controller directly, and that's why you need to run it under DOS, not Windows. I doubt it chews up CPU time for anything but polling; it has nothing better to do anyway, as DOS is a single-tasking OS. While you're formatting a floppy it wouldn't be very useful to l
Re: (Score:1)
Re: (Score:2)
I'm merely stating facts. FIFOs in UARTs and other I/O devices help with performance: you only pay one interrupt entry/exit overhead per FIFO access, not per byte. A lot of software is poorly written, that's a fact, so just because things wouldn't work right with as-supplied MS-DOS and BIOS doesn't mean that the hardware was entirely to blame.
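Rough interrupt-rate arithmetic behind that claim (8N1 framing and a 14-byte FIFO trigger level assumed):

    #include <stdio.h>

    int main(void)
    {
        double baud = 115200.0;
        double bytes_per_sec = baud / 10.0;    /* 10 bits per byte with 8N1 */

        printf("one interrupt per byte:     ~%.0f interrupts/s\n",
               bytes_per_sec);                              /* ~11520 */
        printf("16550A FIFO (trigger = 14): ~%.0f interrupts/s\n",
               bytes_per_sec / 14.0);                       /* ~823 */
        return 0;
    }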
Re: (Score:2)
Error-correcting modems have transmit and receive buffers that are not part of the UART. The USR Courier even had a setting to lower the buffer size and latency for better performance with short-block protocols like xmodem.
Re:What is bufferbloat? (Score:5, Interesting)
http://queue.acm.org/detail.cfm?id=2071893 [acm.org] and http://www.bufferbloat.net/projects/bloat/ [bufferbloat.net]
Re: (Score:2)
One way to combat the problems of bufferbloat is for the most-used websites to add support for SPDY. Using one TCP connection per website, instead of 6 connections per domain with sharding across 6 domains, helps reduce the problem. Obviously that doesn't solve P2P.
The Apache module mod_spdy is in beta, and the nginx developers mentioned on Twitter that they expect to have something in May.
Firefox 11 and Chrome already support it (they use the same SSL/TLS library, so it was probably easier to port to Firefox).
Re: (Score:2)
But it is disabled by default in Firefox 11 as it is the first release with SPDY. That will probably change in Firefox 12 or Firefox 13.
So, next week then.
Hm (Score:3)
Re:Hm (Score:4, Insightful)
Has there been widespread empirical analysis of bufferbloat?
No - it is a meme started by one guy (he of X11 protocol fame) and there is still quite a sceptical audience.
If your TCP flow-control packets are subject to QoS prioritisation (as they should be) then bufferbloat is pretty much moot.
Re:Hm (Score:5, Insightful)
People might be sceptical about how big the problem is, but the analysis itself and the diagnosis are sound. Most people are only surprised they did not think of it before.
The math is simple: if you have a buffer that is never empty, every packet has to wait in that buffer. If a buffer is full all the time, it serves no purpose; it only delays every packet. And given that RAM got so cheap, buffers in devices grew so much faster than bandwidth that you now often have buffers big enough that draining them takes full seconds. Such a buffer running in always-full mode means high latency for no gain.
All the additional factors (TCP going haywire when latency is too high, and no longer being able to work out the optimal sending rate when no packets get dropped) only make the situation worse, but the basic point is simple: a buffer that is always full is a buffer that has only downsides, and the bigger it is, the worse they get.
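To put a number on it, a quick back-of-the-envelope calculation (the figures are made up for the example):

    #include <stdio.h>

    int main(void)
    {
        double buffer_bytes = 256 * 1024;   /* e.g. a 256 KB device buffer */
        double link_bps     = 1e6;          /* a 1 Mbit/s uplink */

        /* If the buffer sits full, every packet waits behind its entire
         * contents before reaching the wire. */
        double delay_s = buffer_bytes * 8 / link_bps;
        printf("added queueing delay: %.1f seconds\n", delay_s);   /* ~2.1 s */
        return 0;
    }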
Re: (Score:2)
Why would my buffer never be empty? Just because you have more buffers doesn't mean you process anything slower than you have to. It just means that if you can't get around to something immediately, you can catch up later.
The problems with bufferbloat seem to me to be, at the least, greatly exaggerated, and poor examples like yours are just one reason why.
Re:Hm (Score:5, Informative)
Why would my buffer never be empty? Just because you have more buffers doesn't mean you process anything slower than you have to. It just means that if you can't get around to something immediately, you can catch up later.
That's exactly the problem. TCP relies on packets being dropped in order to manage connections. When buffers are instead allowed to fill up, delaying packets instead of outright dropping them, the application relying on those packets experiences extremely high latency instead of being rate-limited to fit inside of the available bandwidth.
The problem has come to pass because of how counterintuitive this really is. It's a GOOD THING to discard data you can't transfer RIGHT NOW, rather than wait around and send it later.
I suppose one of the only analogs I can think of might be the Blackbird stealth plane. Leaks like a sieve on the ground, spitting fuel all over the place, because at altitude the seals expand so much that they'd pop if it hadn't been designed to leak on the ground. Using gigantic packet buffers would be like "fixing" a Blackbird so that it didn't leak on the runway.
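A toy sketch of the additive-increase/multiplicative-decrease behaviour being described, just to show why the loss signal matters; the threshold and numbers are arbitrary:

    #include <stdbool.h>
    #include <stdio.h>

    int main(void)
    {
        double cwnd = 1.0;                    /* congestion window, in packets */
        for (int rtt = 0; rtt < 20; rtt++) {
            bool loss_seen = (cwnd > 40);     /* pretend drops start above 40 */
            if (loss_seen)
                cwnd /= 2;                    /* multiplicative decrease on loss */
            else
                cwnd += 1;                    /* additive increase per RTT */
            printf("rtt %2d  cwnd %.1f\n", rtt, cwnd);
        }
        /* With a huge buffer, "loss_seen" stays false far too long: cwnd keeps
         * climbing and the excess shows up as queueing delay instead. */
        return 0;
    }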
Re:Hm (Score:5, Funny)
A stealth plane analogy. I didn't see that coming!
(very informative post, btw - thank you:)
Re: (Score:2)
The SR-71 is not a stealth aircraft.
Re: (Score:2)
The SR-71 is not a stealth aircraft.
http://en.wikipedia.org/wiki/Lockheed_SR-71_Blackbird#Stealth_and_threat_avoidance [wikipedia.org]
It was an early, not entirely successful, attempt at one. It does have a radar cross-section significantly smaller than its actual size, which I think qualifies it for the title, even if other, more recent designs are much better at it.
Re: (Score:2)
I suppose it's ultimately a matter of opinion, but I don't think it qualifies. The SR-71 was not intended to sneak past radars undetected. And when it's cruising at Mach 3 at 80,000 feet, it will have a large RCS to radars on the ground.
The SR-71's flyovers were not supposed to be secret from the nations they flew over--that's why it was designed to outrun SAMs.
In contrast, the F-117 and B-2 are explicitly intended to fly past radars undetected. That's stealth.
Re:Hm (Score:4, Informative)
Let's assume we do not have TCP at first. Assume you have one slowest link, like the cable from your house to the Internet, while both the Internet and the network within your house can instantly handle all the load that cable can generate.
Let's take the case where one person in the house (or one of that person's programs) has a long download running (say, several hours). If nothing else is happening, you want to utilize the full cable. Let's assume further that, by some means, the sending server actually gets the speed right, so it sends exactly as much as the cable to your house can handle. So far, so good.
Now some other program, or someone else, looks at a few sites or downloads a small file that would take one second over the cable (if it were the only thing running). This causes some additional traffic, so you get more data than your cable can handle. Your ISP has a buffer at its side of the cable, so the packets end up in the buffer. But the server you are downloading from still sends as much as the cable can carry, so if the buffer before the cable is big enough, you now have exactly one second's worth of packets sitting in it. The download is still running, nothing else is running, and the buffer keeps exactly one second of data in it. Everything still arrives, but everything arrives one second later. There is no advantage to the buffer here; everything is still delayed by a full second. If you had instead dropped one second's worth of data, the server would have had to retransmit it, so it would have arrived a second late. Thus, without the buffer, essentially everything would still arrive at the same time, but any other requests going over the same line would get through immediately instead of a full second later.
Now, nothing will send exactly the amount of data your cable can handle. But it will try (everything, of course, tries to send as well as it can). If the senders manage to somehow measure the problem and slow down so that the buffer empties again, the buffer makes sense. If they can keep producing enough data that the buffer never becomes empty, your buffer is too big and is only creating problems.
For bigger problems, now enter TCP (or anything else that wants to add reliability to the Internet). Some packets might get lost, so you need to retransmit packets that are lost. For this you wait a bit for the packet, and if it does not arrive, you ask for it to be resent. You cannot wait very long, as the user does not like waiting if it was part of something interactive. So if a packet arrives only much later, the computer will already have sent a request to have the packet resent. So if you have buffers big enough to hold, say, a whole second of data, and the buffers are there in both directions, then the buffers might already contain multiple requests to resend the same packet, and thus multiple copies of the packet sent out (remember, a request to resend a packet might have been lost, too). So while buffers that are big enough avoid packets being resent, buffers that are too big cause unnecessary resends.
Now enter the specifics of TCP. Sending as fast as possible is solved in TCP by getting faster and faster until you are too fast (detected by too many packets getting lost) and then slowing down again. If you have buffers around that can absorb a big amount of data, one side can send too fast for quite a long time while everything still arrives (sometimes a bit later, but it does). So it gets faster and faster and faster. The latency gets bigger and bigger. Finally the buffer is full, so packets need to be dropped. But you usually drop the packets arriving last, so once this moment comes, there can still be a long time before the other side realizes stuff is missing (like, say, a whole second, which is half an eternity for a positronic android ^H^H^H^H^H a computer), so the sending side was still speeding up the whole time. Now all TCP connections running over that buffer collapse, causing all of them to go back to a very slow speed, and you have lost a whole lot of packets, many more than if you did not have a buffer at all (some of them even multiple times).
Re: (Score:2)
We did think of this before, though. I remember when I was setting up my first FreeBSD firewall around 2001, and I was annoyed that I couldn't figure out QoS well enough to mitigate it. I noticed that RTT went to shit when the link was saturated, did some quick googling, and read about it on another website, where it was already old news in an outdated HOWTO.
Honestly, the idea that this is some sort of amazing thing some guy noticed is silly--everybody noticed it, and the only exceptional thing is how excep
Re:Hm (Score:5, Interesting)
TCP flow-control makes the assumption that dropped packets and retransmission requests are a sufficient method of feedback. When it doesn't receive them, it assumes everything upstream is going perfectly and keeps sending data over (in whatever amounts your QoS setup dictates, but that's not the point).
Except everything upstream may not be going well. Other devices along the way may be masking problems -- instead of saying "I didn't get that, resend", they use their local buffers and tell downstream nothing. So TCP flow-control is being kept in the dark. Eventually a buffer runs out, and that's when that device starts asking for a huge pile of fresh data at once... which is not how it was supposed to work. So the speed of the connection keeps fluctuating up and down even though the pipes seem clear and underused.
The immediate workaround is to trim buffers down everywhere. Small buffers are good... but they shouldn't grow so much as to take a life of their own, so to speak.
One thing done in Linux 3.3 to start addressing this properly is the ability to express buffer size in bytes: Byte Queue Limits (BQL). Historically, queues were limited by packet counts only, because they were designed in an era when packets were small... but nowadays they get big too. This gives us better control and somewhat alleviates the problem.
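For the driver-minded, here is a very rough sketch of the BQL pattern a Linux 3.3 driver follows. The netdev_sent_queue()/netdev_completed_queue() calls are the actual BQL hooks added in this release; the surrounding driver shape and my_hw_post_tx_descriptor() are invented for illustration.

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    void my_hw_post_tx_descriptor(struct net_device *dev, struct sk_buff *skb);

    /* Transmit path: tell BQL how many bytes were just handed to the NIC. */
    static netdev_tx_t my_start_xmit(struct sk_buff *skb, struct net_device *dev)
    {
        my_hw_post_tx_descriptor(dev, skb);   /* hypothetical hardware call */
        netdev_sent_queue(dev, skb->len);     /* bytes now queued to hardware */
        return NETDEV_TX_OK;
    }

    /* TX-completion path: report what actually left, so BQL can keep the ring
     * just full enough to avoid starving the NIC, and no fuller. */
    static void my_tx_complete(struct net_device *dev, unsigned int pkts,
                               unsigned int bytes)
    {
        netdev_completed_queue(dev, pkts, bytes);
    }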
The even better solution is to make buffers work together with TCP flow-control. That would be Active Queue Management (AQM), which is still being developed. It will basically be an algorithm that decides how to adapt buffer use to traffic bursts. But a good enough algorithm has not been found yet (there is one, but it's not very good). Once it is found it will need testing, then wide-scale deployment, and so on. That might still take years.
Re: (Score:2)
Re: (Score:2)
I think it's incredibly naïve to believe that we can, in one atomic action, rip out and replace tcp/ip (or whatever other technology) with something that is "better" for whatever value of the word "better" you assign it to have. An incredible amount of work and research has gone into making things work the way that they do, and not only do they work pretty well, but upgrading them to fix issues like this buffer bloat thing is not some Manhattan Project-esque undertaking, like reengineering the internet would be.
TCP has already been replumbed numerous times since its creation. Take a look at the after-market congestion avoidance algorithms [wikipedia.org] that have been bolted on, or new wire-level features like timestamps [wikipedia.org] (now ~mandatory), window scaling [wikipedia.org] (now ~mandatory), SACK [wikipedia.org], and ECN [wikipedia.org]. If AQM takes off, it'll simply be the latest in a line of fixes that's kept TCP working across 37 years of Moore's Law.
Re: (Score:2)
Already designed. It's called ATM AAL5 with early cell discard over an ATM ABR traffic contract. Rejected by the marketplace (minus some niches where it's still used).
It didn't help that the vendors were sluggish implementing the ABR feature, and that the industry didn't realize they needed a loss-leader business strategy to take on IP/Ethernet, so they didn't aggressively pursue low-cost hardware or bridge-to solutions like CIF (f.k.a. FATE) until it was too late.
(And there have been other attempts as well.)
Really
Re: (Score:2)
If only we could scrap the whole mess and design a solution for the problems we face currently, instead of continuing to use the solution to the problems they faced 40 years ago. Wishful thinking, I know...
There have been many competitors to TCP/IP, but they've all fallen by the wayside because TCP/IP worked better in practice. (I remember OSI networking, but not fondly.) The key is that the internet scales better than the others, and that's made it possible for far more people to be connected to it and that in turn makes it by far the most attractive network to work with in the first place. The killer app of the internet was DNS, and especially its implementation as BIND...
Re: (Score:1)
The even better solution is to make buffers work together with TCP flow-control. That would be Active Queue Management (AQM), which is still being developed.
Forgive the pessimism, but the reaction which suggests itself is, What Could Possibly Go Wrong by adding yet another layer of cruft?
Similar database buffer bloat (Score:5, Interesting)
There is a similar, and well known situation that comes up in database optimization. For example, the Oracle database has over the years optimized its internal disk cache based on its own LRU algorithms, and performance tuning involves a combination of finding the right cache size (there is a point where too much causes performance issues), and manually pinning objects to the cache. If the database is back-ended by a SAN with its own cache and LRU algorithms, you wind up with the same data needlessly cached in multiple places and performance statistics reported incorrectly.
As a result I've run across recommendations from Oracle and other tuning experts to disable the SAN cache completely in favor of the database disk cache. That, or perhaps keep the SAN write cache and disable the read cache, because the fact is that Oracle knows better than the SAN the best way to cache data for the application. Add in caching at the application-server level, which involves much of the same data, and we have the same information needlessly cached at many tiers.
Then, of course, every vendor at every tier will tell you that you should keep their cache enabled, because caching is good and of course it doesn't conflict with other caching, but the reality is that caching is not 100% free... there is overhead to manage the LRU chains, do garbage collection, etc. So in the end you wind up dealing with a database buffer bloat issue very similar to Cringely's network buffer bloat. Let's not discount the fact that many server-disk communications are migrating toward protocols similar to networking ones (NAS, iSCSI, etc). Buffer bloat is not a big deal at home or even on a mid-sized corporate intranet, but for super-high-speed communications like on-demand video, and for mission-critical multi-terabyte databases, these things matter.
Re: (Score:2)
"If your TCP flow-control packets are subject to QoS prioritisation ( as they should be ) then bufferbloat is pretty much moot."
Are you saying backbone routers should implement QoS?
What about how TCP naturally harmonizes when too many connections start to build?
QoS doesn't solve the latency issue, it just pushes the latency down to the "lower priority" streams. It still doesn't solve the issue when thousands of TCP connections harmonize and ramp up all at the same time and fill a buffer until packet loss oc
Yes... (Score:2)
Yes there has.
Unfortunately, the analysis is "its almost all bad". We have seen with Netalyzr some network kit that had properly sized buffers, sized in terms of delay rather than capacity, but the hardware in question (an old Linksys cable modem) was obsolete and when I bought one and plugged it into my connection, I got into the cable company's walled garden of 'your cable modem is too obsolete to be used'.
We would encourage all device manufacturers to test their devices with Netalyzr, it can find a lot
Re: (Score:1)
No, because buffer bloat is nonsense; buffers exist because they are necessary for reliable operation. If buffer use averages more than one packet, it is because input bandwidth is greater than output bandwidth; the alternative is data loss.
The buffer bloat guy wants every bit echoed immediately, then the problem will be the propagation of fragments of garbage.
Buffers on the endpoints vs. in the network (Score:5, Informative)
the alternative is data loss
TCP was designed to work around this by putting predictably sized retransmit buffers on the endpoints, and then the endpoints would scale their transmission rate based on the rate of packet loss that the host on the other end reports. Bufferbloat happens when unpredictably sized buffers in the network interfere with this automatic rate control.
Re: (Score:3)
Not only in the network, but also in your OS networking stack and in your network card and wifi drivers.
That is a large part of what they are fixing in Linux.
It is obvious why wifi drivers might have large buffers: retransmission is much more common than on the wire.
Re: (Score:3)
You are the most ignorant poster on slashdot for the week. Congratulations.
(Oh, you're also willfully so, which makes you the single most stupid poster, as well. Congratulations.)
Re: (Score:2)
The ACM Queue did a bit on it last year.
http://queue.acm.org/detail.cfm?id=2071893 [acm.org]
Unstable (Score:1)
Re: (Score:2)
Doesn't v3.3 have to first be installed on ... (Score:4, Insightful)
... routers and gateways to have any effect?
I state the obvious because who's already installing it on any but home routers so soon after release?
Re: (Score:1)
Yes, however at least the newer devices going forward can have it built in. So it is not going to be fixed 'overnight'. It will take years to fix.
Re: (Score:1)
No. 3.3 adds the ability to control YOUR buffer size based on packet size. This is meant to ensure that your buffer doesn't become larger than it needs to be. It would be nice to see this upstream as well, but that will take time. And as for your in-house router: you should be running something where you control the kernel anyway. At least I do.
Haha what? (Score:3)
Umm, it was only released 9 days ago. Do you really think every server, router, gateway, etc. is upgraded through magic days after a new kernel version is released? Considering most devices will probably never be updated, don't you think it's a bit early to be asking this?
WARNING: links to Cringely article (Score:2)
I had this problem too (Score:2)
Then I discovered it was mostly firebug with the network log turned on that ate the memory with every ajax request made by setInterval.
Re: (Score:1)
about:memory
Though the implementation they have now isn't granular enough to set loose specific sets of memory from a plug-in here, or a process there. Just a 'minimize memory' button at the bottom.
As for the OP, I'm gonna go with 'no'.
Re: (Score:1)
See? Any internet related technology article can be used to troll Firefox.
You're wrong, however. The buffers being bloated aren't available to Firefox the way you think they are. We're talking about buffers on network cards not buffers in main memory where Firefox supposedly kills your kittens.
Look, if Firefox hurts you so bad that it's created a compulsive behavior to troll even unrelated articles you should 1. Stop using it, and 2. get help for your compulsive disorder. It's unhealthy for you to c
Re: (Score:2)
Firefox 11 pretty much fixes most outstanding bugs; there are a few left for Firefox 12. They're now busy with the top 100 addons, and over 50% of the leaky addons have been fixed.
I've never understood this problem. (Score:5, Insightful)
Buffering serves a purpose where the rate of receiving data is potentially faster than the rate of sending data in unpredictable conditions. A proper event driven system should always be draining the buffer whenever there is data in it that can possibly be transmitted.
Simply increasing the size of a buffer should absolutely not increase the time that data waits in that buffer.
A large buffer serves to minimize potential dropped packets when there is a large burst of incoming data or the transmitter is slow for some reason.
If a buffer actually adds delay to the system because it's always full beyond the ideal, one of two things is done totally wrong:
a) Data is not being transmitted (draining the buffer) when it should be for some stupid reason.
b) The characteristics of the data (average rate, burstiness, etc.) were not properly analyzed, and the system with the buffer does not meet its requirements to handle such data.
In the end, it's about bad design and bad programming. It is not about "bigger buffers" slowing things down.
Re: (Score:1)
It seems to me that people blame cheap memory, and the larger buffers it makes possible, for this problem, but no - if there is a problem, it's from bad programming.
Buffering serves a purpose where the rate of receiving data is potentially faster than the rate of sending data in unpredictable conditions. A proper event driven system should always be draining the buffer whenever there is data in it that can possibly be transmitted.
Simply increasing the size of a buffer should absolutely not increase the time that data waits in that buffer.
A large buffer serves to minimize potential dropped packets when there is a large burst of incoming data or the transmitter is slow for some reason.
If a buffer actually adds delay to the system because it's always full beyond the ideal, one of two things is done totally wrong:
a) Data is not being transmitted (draining the buffer) when it should be for some stupid reason.
b) The characteristics of the data (average rate, burstiness, etc.) were not properly analyzed, and the system with the buffer does not meet its requirements to handle such data.
In the end, it's about bad design and bad programming. It is not about "bigger buffers" slowing things down.
The issue is that TCP/IP and similar protocols are designed assuming that when the (small) buffers are full, then the packets get lost/rejected and are resent. With large buffers, this assumption is no longer valid. With only one sender and receiver, this appears to make no difference; however, when multiple connections are ongoing with different volumes and latency requirements, the situation is more complex than your mental model would suggest.
You missed the feedback loop (TCP flow contol) (Score:5, Informative)
Unfortunately, I think you haven't quite got this right.
The problem isn't buffering at the *ends* of the link (the two applications talking to one another), rather, it's buffering in the middle of the link.
TCP flow control works by getting (timely notification of) dropped packets when the network begins to saturate. Once the network reaches about 95% of full capacity, it's important to drop some packets so that *all* users of the link back off and slow down a bit.
The easiest way to imagine this is by considering a group of people all setting off in cars along a particular journey. Not all roads have the same capacity, and perhaps there is a narrow bridge part way along.
So the road designer thinks: that bridge is a choke point, but the flow isn't perfectly smooth. So I'll build a car-park just before the bridge: then we can receive inbound traffic as fast as it can arrive, and always run the bridge at maximum flow. (The same thing happens elsewhere: we get lots of carparks acting as stop-start FIFO buffers).
What now happens is that everybody ends up sitting in a car-park every single time they hit a buffer. It makes the end-to-end latency much much larger.
What should happen (and TCP flow-control will autodetect if it gets dropped packet notifications promptly) is that people know that the bridge is saturated, and fewer people set off on their journey every hour. The link never saturates, buffers don't fill, and nobody has to wait.
Bufferbloat is exactly like this: we try to be greedy and squeeze every last baud out of a connection: what happens is that latency goes way too high, and ultimately we waste packets on retransmits (because some packets arrive so late that they are given up for lost). So we end up much much worse off.
A side consequence of this is that the traffic jams can sometimes oscillate wildly in unpredictable manners.
If you've ever seen your mobile phone take 15 seconds to make a simple request for a search result, despite having a good signal, you've observed buffer bloat.
Re: (Score:1)
Oh come on! The guy says he never understood the problem and then goes on to prove it to us.
For that he gets modded 'Insightful'?
That's low even for /.
oversimplified PR noise ignores decade of research (Score:4, Interesting)
The bufferbloat "movement" infuriates me because it's light on science and heavy on publicity. It reminds me of my dad's story about his buddy who tried to make his car go faster by cutting a hole in the firewall underneath the gas petal so he could push it down further.
There's lots of research on this dating back to the '90s, starting with CBQ and RED. The existing research is underdeployed, and merely shortening the buffers is definitely the wrong move. We should use an adaptive algorithm like BLUE or DBL, which are descendants of RED. These don't have constants that need tuning, like queue length (FIFO/bufferbloat) or drop probability (RED), and they're meant to handle TCP and non-TCP (RTP/UDP) flows differently. Linux does support these in 'tc', but (1) we need to do it by default, not after painful amounts of undocumented configuration, and (2) to do them at >1Gbit/s we ideally need NIC support. FWIH Cisco supports DBL in the cat45k sup4 and newer, but I'm not positive, and they leave it off by default.
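For reference, here is the textbook RED drop decision in stripped-down form (real RED also spaces drops out using a count since the last drop, omitted here); the point is simply how many constants (w_q, min_th, max_th, max_p) someone has to pick and tune by hand:

    #include <stdbool.h>
    #include <stdlib.h>

    static double avg_qlen;                 /* EWMA of the queue length */

    bool red_should_drop(double qlen, double w_q, double min_th,
                         double max_th, double max_p)
    {
        avg_qlen = (1.0 - w_q) * avg_qlen + w_q * qlen;

        if (avg_qlen < min_th)
            return false;                   /* queue short: always accept */
        if (avg_qlen >= max_th)
            return true;                    /* queue long: always drop */

        /* In between, drop with probability rising from 0 to max_p. */
        double p = max_p * (avg_qlen - min_th) / (max_th - min_th);
        return ((double)rand() / RAND_MAX) < p;
    }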
For file sharing, HFSC is probably more appropriate. It's the descendant of CBQ, and it is supported in 'tc'. But to do any queueing on cable Internet, Linux needs to be running, with 'tc', *on the cable modem*. With DSL you can somewhat fake it because you know what speed the uplink is, so you can simulate the ATM bottleneck inside the kernel and then emit pre-scheduled packets to the DSL modem over Ethernet. The result is that no buffer accumulates in the DSL modem, and packets get laid out onto the ATM wire with tiny gaps between them---this is what I do, and it basically works. With cable you don't know the conditions of the wire, so this trick is impossible. Also, end users can only effectively schedule their upstream bandwidth, so ISPs need to somehow give you control of the downstream, *configurable* control through reflected upstream TOS/DSCP bits or something, to mark your filesharing traffic differently, since obviously we can't trust them to do it.
Buffer bloat infuriates me because it's blitheringly ignorant of implemented research more than a decade old and is allowing people to feel like they're doing something about the problem when really they're just swapping one bad constant for another. It's the wrong prescription. The fact he's gotten this far shows our peer review process is broken.
Re:oversimplified PR noise ignores decade of resea (Score:5, Interesting)
You are correct that replacing one bad constant with another is a problem, though I certainly argue that many of our existing constants are egregiously bad and substituting a less bad one makes the problem less severe: that is what the cable industry is doing this year in a DOCSIS change that I hope starts to see the light of day later this year. That can take bloat in cable systems down by about an order of magnitude, from typically >1 second to on the order of 100-200ms; but that's not really good enough for VOIP to work as well as it should. The enemy of the good is the perfect: I'm certainly going to encourage obvious mitigations such as the DOCSIS changes while trying to encourage real long-term solutions, which involve both re-engineering of systems and algorithmic fixes. There are other places where similar "no brainer" changes can help the situation.
I'm very aware of the research that is over a decade old, and of the fact that what exists is either *not available* where it is now needed (e.g. any of our broadband gear, our OSes, etc.) or *doesn't work* in today's network environment. I was very surprised to be told that even where AQM was available, it was often/usually not enabled, for reasons that are now pretty clear: classic RED and its derivatives (the most commonly available) require manual tuning, and if untuned, they can hurt you. Like you, I had *thought* this problem was a *solved* problem in the 1990's; it isn't....
RED and related algorithms are a dead end: see my blog entry on the topic: http://gettys.wordpress.com/2010/12/17/red-in-a-different-light/ and in particular the "RED in a different light" paper referenced there (which was never formally published, for reasons I cover in the blog posting). So thinking we can just apply what we have today is *not correct*; when Van Jacobson tells me RED won't hack it (RED was originally designed by Sally Floyd and Van Jacobson), I tend to believe him.... We have an unsolved research problem at the core of this headache.
If you were tracking kernel changes, you'd see "interesting" recent patches to RED and other queueing mechanisms in Linux; this shows you just how much such mechanisms have actually been used: that bugs are being found in these algorithms in this day and age means, in short, that what we have had in Linux has often been broken, showing little active use.
We have several problems here:
1) basic mistakes in buffering, where semi-infinite, statically sized buffers have been inserted into lots of hardware and software. BQL goes a long way toward addressing some of this in Linux (the device-driver/ring-buffer bufferbloat that is present in Linux and other operating systems).
2) variable bandwidth is now commonplace, in both wireless and wired technologies. Ethernet scales from 10Mbps to 10 or 40Gbps.... Yet we've typically had static buffering, sized for the "worst case". So even stupid things like cutting the buffers in proportion to the bandwidth you are operating at can help a lot (similar to the DOCSIS change), though with BQL we're now in a better place than before.
3) the need for an AQM that actually *works* and never hurts you. RED's requirement for tuning is a fatal flaw; we need an AQM that adapts dynamically over orders of magnitude of bandwidth *variation* on timescales of tens of milliseconds, a problem not present when RED was designed or when most of the AQM research of the 1990's was done. Wireless was a gleam in people's eyes in that era.
I'm now aware of at least two different attempts at fully adaptive AQM algorithms; I've seen simulation results for one of them, which look very promising. But simulations are ultimately only a guide (and sometimes a source of real insight): running code is the next step, along with comparison against existing AQMs in real systems. Neither of these AQMs has been published, though I'm hoping to see either or both published soon and their implementation happening immediately thereafter.
So no, existing AQM algorithms won't hack it; the size of t
Re:wifi forward error correction (Score:4, Interesting)
There is one other problem: TCP assumes that dropped packets mean the link is saturated, and backs off the transmit rate. But Wireless isn't like that: frequently packets are lost because of noise (especially near the edge of the range). TCP responds by backing off (it thinks the link is congested) when actually it should be trying harder to overcome the noise. So we get really really poor performance(*).
In this case, I think the kernel should somehow realise that there is "10 MB of bandwidth, with a 25% probability of packets returning". It should do forward-error correction, pre-emptively retransmitting every packet 4x as soon as it is sent. Of course there is a huge difference between the case of lots of users on the same wireless AP, all trying to share bandwidth (everyone needs to slow down), and 1 user competing with lots of background noise (the computer should be more aggressive). TCP flow-control seems unable to distinguish them.
(*) I've recently experienced this with wifi, where the connection was almost completely idle (I was the only one trying to use it), but I was near the edge of range from the AP. The process of getting onto the network with DHCP was so slow that most times it failed: by the time DHCP got the final ACK, NetworkManager had seen a 30-second wait and brought the interface down! But if I could get DHCP to succeed, the network was usable (albeit very slow).
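To put some rough numbers on the "retransmit every packet 4x" idea, assuming losses are independent per copy (and ignoring the extra airtime the duplicates burn), the chance that at least one of k copies arrives is 1 - loss^k; both readings of the "25% probability of packets returning" figure are shown:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Per-copy loss probabilities to try. */
        double loss[] = { 0.25, 0.75 };

        for (int i = 0; i < 2; i++)
            for (int k = 1; k <= 4; k++)
                printf("loss %.2f, %d copies -> delivered with p = %.3f\n",
                       loss[i], k, 1.0 - pow(loss[i], k));
        return 0;
    }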
Re: (Score:1)
Re: (Score:2)
Yes...which is why DHCP shows the problem even more severely. DHCP needs 4 consecutive packets to get through OK, and when the environment is noisy, this doesn't happen. But the same happens for TCP, mitigated (slightly) by TCP having a faster retransmit timeout.
My point still stands:
Symptom: packet loss.
Common cause: link saturation.
Remedy: back off slightly, and hope everyone else also notices.
Symptom: packet loss (indistinguishable from the abov
Re: (Score:2)
DHCP needs 4 consecutive packets to get through OK,
Eh? It needs 4 packets to get through, but doesn't require them all to be consecutive. Lose a dhcp request and the client will retransmit without going back to discover...
Re: (Score:3)
Re: (Score:2)
There is one other problem: TCP assumes that dropped packets mean the link is saturated, and backs off the transmit rate. But Wireless isn't like that: frequently packets are lost because of noise (especially near the edge of the range). TCP responds by backing off (it thinks the link is congested) when actually it should be trying harder to overcome the noise. So we get really really poor performance(*).
In this case, I think the kernel should somehow realise that there is "10 MB of bandwidth, with a 25% probability of packets returning". It should do forward-error correction, pre-emptively retransmitting every packet 4x as soon as it is sent. Of course there is a huge difference between the case of lots of users on the same wireless AP, all trying to share bandwidth (everyone needs to slow down), and 1 user competing with lots of background noise (the computer should be more aggressive). TCP flow-control seems unable to distinguish them.
Shouldn't this be handled at the datalink level by the wireless hardware? If there's transmission errors due to noise, more bits should be dedicated to ECC codes. The reliability is maintained at the expense of (usable) bandwidth and the higher layers of the stack just see a regular link with reduced capacity.
Re: (Score:2)
Shouldn't this be handled at the datalink level by the wireless hardware? If there's transmission errors due to noise, more bits should be dedicated to ECC codes. The reliability is maintained at the expense of (usable) bandwidth and the higher layers of the stack just see a regular link with reduced capacity.
Yes, it certainly should be. But it often isn't.
Incidentally, regular ECC won't help here: adding 1kB of ECC to 1KB of packet doesn't help against a 1ms long burst of interference, which obliterates the whole packet.
Re: (Score:1)
The solution for wireless could be a TCP congestion control change, such as Westwood+ which accounts for bandwidth by delay rather than dropped packets.
But even better is a simple proxy setup. The proxy handles the request at the AP for the client, and retransmits can occur over the much faster wireless link.
It's mostly a cost issue, since only recent APs are powerful enough to run a local caching proxy.
Re:oversimplified PR noise ignores decade of resea (Score:4, Informative)
Buffer bloat infuriates me because it's blitheringly ignorant of implemented research more than a decade old and is allowing people to feel like they're doing something about the problem when really they're just swapping one bad constant for another. It's the wrong prescription. The fact he's gotten this far shows our peer review process is broken.
Actually, this focus is driven very much by a technical approach. We know it is a problem in the real world due to widespread, empirical measurements. Basically, for most users, the Internet can't "walk and chew gum": interactive tasks or bulk data transfers alone work just fine, but combining bulk data transfer with interactive activity results in a needless world of hurt.
And the proper solution is to utilize the solutions known in the research community for a decade plus, but the problem is getting AQM deployed to the millions of possible existing bottlenecks, or using 'ugly-hack' approaches like RAQM where you divorce the point of control from the buffer itself.
Heck, even a simple change to FIFO design -- "drop incoming packets when the oldest packet in the queue is >X ms old" [1], that is, sizing buffers in delay rather than capacity -- is effectively good enough for most purposes: I'd rather have a good AQM algorithm in my cable modem, but without that, a simple buffer sized in delay gets us 90% of the way there.
[1] X should be "measured RTT to the remote server", but in a pinch a 100-200ms number will do in most cases.
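A minimal sketch of the delay-bounded FIFO described above: new packets are refused whenever the oldest queued packet has already waited longer than X ms. All the types and the clock helper are invented for the example.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct pkt {
        uint64_t enqueue_ms;     /* timestamp when the packet was queued */
        struct pkt *next;
    };

    struct delay_fifo {
        struct pkt *head, *tail;
        uint64_t max_delay_ms;   /* e.g. 100-200 ms, per the footnote above */
    };

    extern uint64_t now_ms(void);             /* hypothetical clock source */

    bool fifo_enqueue(struct delay_fifo *q, struct pkt *p)
    {
        /* If the head of the queue is already older than the delay budget,
         * the buffer holds more than X ms of data: drop the newcomer. */
        if (q->head && now_ms() - q->head->enqueue_ms > q->max_delay_ms)
            return false;

        p->enqueue_ms = now_ms();
        p->next = NULL;
        if (q->tail)
            q->tail->next = p;
        else
            q->head = p;
        q->tail = p;
        return true;
    }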
Re: (Score:2)
"The bufferbloat "movement" infuriates me because it's light on science and heavy on publicity."
Of the articles I've read on it, they've been VERY heavy on science.
"merely shortening the buffers is definitely the wrong move"
Who is saying this? The issue that I have read about concerns the HUGE difference in performance between different links. If you have a 10Gb card and a 1Mb link, the appropriate buffers are grossly different in size. To fix TCP, we can't look at packet loss; we need to look at latency.
The proble