
Got (Buffer) Bloat?

mtaht writes, "After a very intense month of development, the Bufferbloat project has announced the debloat-testing git kernel tree, featuring (at the suggestion of Van Jacobson) a wireless network latency-smashing algorithm called eBDP, the SFB and CHOKe packet schedulers, and a slew of driver-level fixes that reduce network latency across the Linux kernel by over two orders of magnitude. Got Bloat?"
  • what it is (Score:1, Insightful)

    by leaen ( 987954 )
    It would be good to explain in the summary what a program nobody has heard of before actually does
    • Re:what it is (Score:5, Informative)

      by firstnevyn ( 97192 ) on Saturday February 26, 2011 @05:55AM (#35322612)

      It's about the downside of memory becoming cheap: oversized buffers cause latency problems for the congestion control mechanisms that rely on the endpoints being able to inform the sender when it's sending too fast.

      Jim Gettys's research blog entry [wordpress.com] explains the problem in detail.

      • Re: (Score:3, Informative)

        by Anonymous Coward

        oblig. car analogy, by Eric Raymond no less:
        https://lists.bufferbloat.net/pipermail/bloat/2011-February/000050.html [bufferbloat.net]

        == Packets on the Highway ==

        To fix bufferbloat, you first have to understand it. Start by
        imagining cars traveling down an imaginary road. They're trying to get
        from one end to the other as fast as possible, so they travel nearly
        bumper to bumper at the road's highest safe speed.
        [snipped]

    • What bufferbloat is (Score:5, Informative)

      by Sits ( 117492 ) on Saturday February 26, 2011 @05:59AM (#35322636) Homepage Journal

      My understanding may not be correct but:

      Bufferbloat (I first came across the term bufferbloat in this blog post by Jim Gettys [wordpress.com]) is the nickname that has been given to the high latency that can occur in modern network connections due to large buffers in the network. An example could be the way that a network game on one computer starts to stutter if another computer starts to use a protocol like bittorrent to transfer files on the same network connection.

      The large buffers seem to have arisen from a desire to maximise download throughput regardless of network conditions. This can give rise to a situation where small urgent packets are delayed because big packets (which perhaps should not have been sent) are queued up in front of them. The system sending the big packets is not told to stop sending them so quickly, because its packets are still being delivered...
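
      To put rough numbers on the effect (a back-of-the-envelope sketch; the buffer size and link rate below are assumed for illustration, not taken from the article):

          # Worst-case queueing delay behind a full buffer.
          # Both figures are illustrative assumptions.
          buffer_bytes = 256 * 1024        # assume a 256 KiB buffer in the modem
          uplink_bits_per_sec = 1_000_000  # assume a 1 Mbit/s ADSL uplink

          delay_s = buffer_bytes * 8 / uplink_bits_per_sec
          print(f"worst-case queueing delay: {delay_s:.1f} s")  # ~2.1 s

      A two-second wait for every small, urgent packet stuck behind that queue is exactly the stutter described above.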

      The linked article suggests that people have modified the Linux kernel source so that those who know how to compile their own kernels can test ideas for reducing the bufferbloat effect on their hardware and report back their results.

      Does this help explain things a bit?

      • by MichaelSmith ( 789609 ) on Saturday February 26, 2011 @06:13AM (#35322668) Homepage Journal

        high latency that can occur in modern network connections due to large buffers in the network

        Nobody ever explained this to me, but I was using ping to measure latency on a network where I was actually most interested in ssh. Ping times went something like 10ms, 50ms, 90ms, 130ms... up to about 500ms, then started again at 10ms, 50ms and so on. Maybe my pings shared a buffer with a large, periodic data transfer, and when that transfer filled a buffer somewhere, my latency reset.

        I am pretty sure the people actually operating the WAN in question had no idea what was going on either.

      • by Sits ( 117492 ) on Saturday February 26, 2011 @06:23AM (#35322688) Homepage Journal

        I should have also linked to a definition of bufferbloat by Jim Gettys [wordpress.com]. For the curious here's a page of links to bufferbloat resources [bufferbloat.net] and a 5 minute animation that shows the impact of large buffers on network communication (.avi) [bufferbloat.net].

      • by Lennie ( 16154 ) on Saturday February 26, 2011 @06:30AM (#35322704)

        The code changes to the Linux kernel also reduce the size and ill effects of buffers inside the kernel and drivers.

      • My home internet connections have previously suffered from enormous buffers in the DSLAM - setting off a big download could cause ping times to increase to about 2 seconds, rendering other interactive use of the connection impossible.

        Still, either this has been fixed now or more modern versions of TCP are more sophisticated, because it doesn't seem to happen any more - at least not to the same degree.

    • Re:what it is (Score:5, Informative)

      by shish ( 588640 ) on Saturday February 26, 2011 @06:24AM (#35322692) Homepage
      Most traffic throttling algorithms are based on the idea that the router will say "hey, slow down" if a client overloads it -- but when the router has lots of RAM, there is a tendency for it to just keep accepting and accepting, with the client happily pushing data at full speed, while the router is queuing up the data and only moving it upstream very slowly. Because the queues end up being huge, traffic going through that router gets lagged.
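
      A toy simulation of that behaviour (the rates are made-up illustrative numbers, not from any real router):

          # Sender pushes faster than the bottleneck drains; nothing is
          # dropped because the queue is "big enough", so latency grows.
          arrival_per_tick = 12   # packets arriving each tick
          service_per_tick = 10   # packets the slow uplink forwards each tick
          queue = 0
          for tick in range(1, 6):
              queue += arrival_per_tick - service_per_tick
              wait = queue / service_per_tick  # ticks a new packet now waits
              print(f"tick {tick}: queue={queue} pkts, wait={wait:.1f} ticks")

      The queue, and with it the lag, grows steadily until the buffer finally fills - only then does the client get any signal to slow down.
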
      • Re:what it is (Score:5, Insightful)

        by TheLink ( 130905 ) on Saturday February 26, 2011 @07:06AM (#35322796) Journal
        IMO it's fine for buffers to be very big.

        What routers should do is keep track of how long packets have been in the router (in milliseconds or even microseconds) and use that with QoS stuff (and maybe some heuristics) to figure out which packets to send first, or to drop.

        For example, "bulk/throughput" packets might be kept around for hundreds of milliseconds, while latency-sensitive packets get priority but are dropped if they cannot be sent within tens of milliseconds (so the sender realizes faster that it should slow down).
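
        A minimal sketch of that proposal (not code from the project; the deadlines and class names are assumed for illustration):

            import time
            from collections import deque

            # Tag each packet with its arrival time; on dequeue, drop anything
            # that has sat in the queue longer than its class's deadline.
            DEADLINE = {"interactive": 0.030, "bulk": 0.500}  # seconds, assumed

            queue = deque()  # entries: (arrival_time, traffic_class, packet)

            def enqueue(packet, traffic_class):
                queue.append((time.monotonic(), traffic_class, packet))

            def dequeue():
                now = time.monotonic()
                while queue:
                    arrived, cls, pkt = queue.popleft()
                    if now - arrived <= DEADLINE[cls]:
                        return pkt  # fresh enough to be worth sending
                    # too old: drop it, so the sender sees loss and slows down
                return None
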
        • Re: (Score:3, Insightful)

          by Brian Feldman ( 350 )

          That's a much more complex solution than "don't buffer so much damn stuff for no good reason."

          • by maswan ( 106561 )

            But you do need big buffers to be able to do fast single-TCP transfers! You need at least rtt * bandwidth of buffer in any place that has a faster uplink than downlink, like distribution switches for instance. And that's several megabytes, per port, in today's gigabit ethernet world. Otherwise you're going to get bad to horrible throughput for high-latency transfers.

            Now, big buffers also need decent buffer management (even trivial RED is orders of magnitude better than "let's just fill the buffer up a
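
            For concreteness, the bandwidth-delay product the parent is describing (the RTT is an assumed example figure):

                # Buffer a single TCP flow needs to keep a fast, long path busy.
                bandwidth_bps = 1_000_000_000  # gigabit ethernet
                rtt_s = 0.1                    # assume a 100 ms round trip

                bdp_bytes = bandwidth_bps * rtt_s / 8
                print(f"BDP: {bdp_bytes / 1e6:.1f} MB")  # 12.5 MB - "several megabytes"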

            • by mtaht ( 603670 )
              You are conflating, to some extent, two things that a lot of people get mixed up. 1) On the TCP/IP sending and receiving side of the hosts, you already have very big, dynamic buffers in the stack for managing BDP. In this case, without very smart queue management, the TX_QUEUE and the DMA TX ring are completely "extra", and mess up the BDP calculation. There are no "extra buffers" in the TCP equations for the host side. 2) On switches and routers, large receive buffers are OK, for BURSTS, with queue ma
              • by amorsen ( 7485 )

                The problem with switches is that most switches have not merely small buffers, which would be ok, but microscopic ones. E.g. Cisco 3560G loses traffic on a gigabit port when faced with 50Mbps of bursty traffic in total coming from two ports. 10ms of buffer at 1Gbps is ~1MB, and most switches have nothing near that per port.

              • This is an important point - and one most people are confused about. I'd like to add a nuance: my understanding is that TX buffers are OK IF the router is very smart about its queuing algorithm within the buffer, dropping packets early for any given sender so that senders don't mistake a large buffer for a large pipe and overspeed their transmissions. I believe what Gettys &c just released (this article) is such a router/buffer queuing algorithm - smart enough to be effectively utilized with large buf

            • Have you done the research to see just who you're disagreeing with about this?

              And why they engineered TCP the way they did?

              I won't pretend that I've walked through the experiments to try to verify their conclusions. I'm not even sure I know enough to interpret them. But... the people shouting the warnings aren't your average Chicken Littles.

              • Have you considered they still could be wrong?

                Personally I've never seen a buffer in software designed the way they described. I've never heard of hardware acting that way either, but as you said, they certainly know more than I do.

                I stopped reading when they said 'it waits for the buffer to fill up before sending', which is true on a per-packet level for a lot of things at an end point; in transit, everything I've ever dealt with will forward packets without waiting UNTIL the output link becomes too congested to do so.

          • Your alternative isn't as simple as you'd like when you have many self-interested clients playing a zero-sum game over the router's bandwidth.

        • by Anonymous Coward
          http://en.wikipedia.org/wiki/Active_queue_management [wikipedia.org] Implemented in most of the data-driven parts of e.g. 3G and 4G networks. Another aspect: http://www.akamai.com/ericsson [akamai.com]
          • by jimrthy ( 893116 )
            AQM is one of the first steps in fixing the problem. It's still just a start. This is a big hairy monster with sharp, pointy teeth.
        • That is indeed part of the solution Jim Gettys suggests - Active Queue Management, such as Random Early Detection (RED).

          The first problem is that a ton of transit systems on the Internet (like indeed a ton of systems everywhere) are effectively running the default behaviors in this respect, with no special tuning. That means FIFO with whatever queue size is available.

          The second is that even if all the ISP operators decided to fix this, "QoS stuff" has the potential to run afoul of Network Neutrality. The current thi
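
          For reference, the core of the RED idea mentioned above, in sketch form (the thresholds are illustrative, and real RED works on a smoothed average queue length):

              import random

              MIN_TH, MAX_TH, MAX_P = 50, 150, 0.1  # pkts, pkts, max drop prob

              def red_should_drop(avg_queue_len):
                  if avg_queue_len < MIN_TH:
                      return False   # queue short: never drop
                  if avg_queue_len >= MAX_TH:
                      return True    # queue long: always drop
                  # in between: drop with probability rising toward MAX_P,
                  # so senders back off *before* the buffer is full
                  p = MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
                  return random.random() < p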

          • by TheLink ( 130905 )
            RED is random.

            I'm proposing they use an AQM algorithm that isn't that stupid/random but rather based on the QoS AND _age_of_packet_. The latter I believe is important.

            One can determine the QoS by fields in the packet header and/or guessing.

            Guessing isn't necessarily that difficult or error prone - latency sensitive stuff uses mostly small packets (because bigger packets = higher latency). And high throughput stuff uses mostly big "max size" packets.

            With my proposal if say a 1Mbps ADSL user gets a quick burs
          • You may be right about some of your points, but (if I understand your point) you're wrong about the QoS and net neutrality stuff. The FCC has never indicated that net neutrality regulations will impinge on "reasonable network management" practices. If folks need to route certain kinds of packets or manage certain kinds of buffers in specific ways to get performant networks, that's just great as far as they're concerned (or anyone else with a legislative/regulatory angle I've ever read about or talked with).

            I correspo

            • Indeed, when I referenced "net neutrality", I wasn't referring to the specific implementation by the FCC, but rather the concept itself. The actual language does include an exception for "reasonable network management", but many network neutrality proponents were (I think somewhat rightfully) concerned at the size and flexibility of such a loophole.

              Several providers saw their networks being loaded up with bit-torrent, and believed that limiting that specific protocol would constitute "reasonable network

              • "verified legitimate voice gateways" are anti-competitive practice, and port based anything is discrimination against destination and source, not to mention that NAT would screw up that scheme totally.
          • QoS policies can be written neutrally, or at least reasonably so; provider abuse is a market problem, not a technical one. Also, a genetic timing algorithm (coming to all FLOSS apps in 3...2...1...) can and will screw up anything at all that tries to do something sneakier than bandwidth*latency*jitter=const, so let them shoot themselves in the foot. Unless they do true destination-based scheduling, they won't be able to screw with anything. But QoS can't save you from bufferbloat. ECN can. Simple standard so
            • I understand that QoS policies can be written neutrally; my concern is that they won't be. Dismissing the actions of ISPs and carriers as "a market problem" doesn't really constitute useful policy. Similarly, ignoring all network hosts that aren't FLOSS apps written after 2011 is also not helpful.

              The issues being examined in the bufferbloat discussions are not about whether there is a technically possible solution somewhere in the world (indeed, both ECN and AQM have been around for a while). The issues

        • What you propose is just dancing around the problem. The slow hop has finite throughput. You either tell the sender to send only as much as the pipe can transfer, or you force the limit on the sender by queueing things, which in turn increases latency, which in turn decreases the transfer's bandwidth.
        • Actually - unless I'm less literate than I thought I was - that's what these kernel patches, driver patches, and utilities are meant to do. The kernel, as well as the drivers, is being patched to do just about what you are saying. And let's remember - most routers and/or modems have Linux inside. So most routers and/or modems can probably be "fixed" with a firmware update.
      • by Idbar ( 1034346 )
        Yes - perhaps you have already read Jain's "Myths About Congestion Management in High-Speed Networks", a paper from the early '90s arguing that increased transfer speeds and cheap memory would not ease the need for better congestion control mechanisms. The problem here is which one is best, how to pick it, and mainly that routers also need to play a role.

        Particularly, with carriers throwing bandwidth at the core, this should be an interesting project for DD-WRT, since gateway
      • 1. Does said router properly respect QoS when deciding what data gets "rushed"?

        2. Does said client have to pay a premium when sending out packets with elevated QoS?

      • by ischorr ( 657205 )

        It's a myth that routers have the ability to tell a client to slow down, at least in the majority of environments (particularly ones with Ethernet segments, but other network types too).

        Ethernet Flow Control has very limited utility here. You'll see it kick in in a rare few congestion cases - like if a switch backplane becomes overloaded - but it is used in a very limited number of situations and definitely will not be used end-to-end on a network for a router or host to tell another host (client) to do sl

        • by ischorr ( 657205 )

          And you know, in my experience that's where one of the real problems - and one of the most commonly undiagnosed problems - exists. In nearly 99% of networks I've looked at where buffer overflows and drops were occurring, network admins not only were unaware of the severity of the packet drops and didn't understand the impact they were having on their *critical* workloads - they had no idea how to even look for them.

        • It's called ECN and nobody is arsed to use it.
    • Re:what it is (Score:4, Informative)

      by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Saturday February 26, 2011 @07:23AM (#35322850) Homepage Journal

      Oh come on, you only have to follow four links to get to the definition [wordpress.com]. What are you, lazy?

      Seriously, this is a true failure of web design. You click from the summary, then you go to the wiki, then you go to the faq, and the faq doesn't even tell you; it references a blog post.

      • by mtaht ( 603670 )
        The web site has only been up for a month, and I agree with you very much that it is hard to get to the core ideas of bufferbloat from the get-go. We are incorporating information from dozens of very large blog posts and hundreds of comments, and have been very busy (among other things) getting hardware running and kernel patches done. bufferbloat.net IS a wiki, however, and registration is open to all. If you can help improve the quality, PLEASE join and do so.
        • by jimrthy ( 893116 )
          If you're one of the ones behind bufferbloat.net (or even just one of the contributors), I want to say "Thank you."
          • by mtaht ( 603670 )
            Well... yes, I'm one of the first 144 on bufferbloat.net. But jg defined bufferbloat so well that the packet traces I'd seen on my wisp6 design in South America suddenly made complete and total sense when I first saw his post back in November. I'd had no idea that what I was dealing with was actually a worldwide problem. But thank you for the thanks. I blush.
  • by Anonymous Coward

    Instead of using TCP/IP (a bastardized version of ISO), people should start using real OSI implementations such as the ISO protocol, with 20-byte addresses and QoS level settings for each of the 7 OSI layers.

    Once upon a time it was an issue of the cost of h/w logic and IP was the cheaper alternative; today the difference is nil and the benefits of ISO are orders of magnitude better than IP.

    bufferbloat, IP address exhaustion, etc. are just a few of the reasons why we should drop IP altogether.

    • The difference is that you can write an smtp server by reading in strings line by line and treating them as commands, then watch the logs and kludge it until it seems to interoperate well enough. With the OSI way of doing things you have to wear a blue tie for a start then you have to print out all the interface definition documents and spread them out on your desk and write the software to the interface.

      You correctly point out that IP is cheaper, but that means all the people who work with it will be cheap

      • Re: (Score:3, Funny)

        by firstnevyn ( 97192 )

        The difference is that you can write an smtp server by reading in strings line by line and treating them as commands, then watch the logs and kludge it until it seems to interoperate well enough. With the OSI way of doing things you have to wear a blue tie for a start then you have to print out all the interface definition documents and spread them out on your desk and write the software to the interface.

        man.. I want your desk if you can spread out all the iso interface definition documents on it and be able to read them

      • by PPH ( 736903 )

        I'm not sure if this has anything to do with it (or we were just victims of slick salespeople):

        Back in the early days of networking, when I was at Boeing, we (engineering) were starting to write some client-server stuff. Every time our IT folks approached us with ISO/OSI networking products as recommendations, there always seemed to be licensing fees attached. Per seat, per process, per user, per CPU, per whatever. While the software gurus were negotiating licenses and contracts, we just said "Screw it. Gi

    • by Anonymous Coward

      Erm, that doesn't seem to be the problem here IMHO. As per TFA, the problem is a maladaptive response to packet loss: throwing cheap memory into dumb buffers that effectively breaks the whole packet-loss concept. Packet loss is not the enemy of throughput; it is the 'big idea' behind maintaining it. But sitting in a bloated buffer is the enemy of throughput, seeing the Internet as a series of sealed pipes is the enemy of throughput, missing the point completely and connecting huge dumb buffers into you OS y

      • by jimrthy ( 893116 )

        Why is this only rated a 1?

        This may be the best summary of the problem that I've seen yet.

    • by Colin Smith ( 2679 ) on Saturday February 26, 2011 @06:40AM (#35322730)

      A lot of our problems today would not be here if we had adopted:

      OSI stack instead of TCP/IP.
      DCE & DFS instead of passwd/whatever + the bastard abomination which is NFS.

      Meh. People are lazy and cheap. Free with the network effect always wins. The Lowest Common Denominator. It's going to take another 15 years before we are near where we were 15 years ago. But this time it will be in Java!
       

      • OSI stack instead of TCP/IP

        Can you please elaborate?

        • Call me an idiot, but I thought TCP/IP was part of the OSI stack.

          I'd also like to hear an explanation.

          • Well, no. In my (limited) experience, you'd use CLNP [wikipedia.org] instead of IP if you were using OSI. And instead of IP addresses you would have NSAP [wikipedia.org] addresses. It's a whole different world actually.

        A lot of our problems today would not be here if we had adopted:
        OSI stack instead of TCP/IP.
        DCE & DFS instead of passwd/whatever + the bastard abomination which is NFS.

        Meh. People are lazy and cheap. Free with the network effect always wins. The Lowest Common Denominator. It's going to take another 15 years before we are near where we were 15 years ago. But this time it will be in Java!

        Did you ever use those things? I've never used the OSI stack (though I have had the misfortune of looking at some of the specs), but

        • Having never worked with the originals (Kerberos and Andrew File System), I don't know if this was a problem added in the "standardization" or if it came with the territory.

          I can't speak about historical implementations, but the current implementations of Kerberos used by Microsoft and the FreeBSD project (and, I assume, most modern implementations elsewhere) can be configured for a system with a 5-line config file that could be generated from the output of a hostname -f call, if the client is otherwise configured properly (has its domain name set properly). It does require a proper DNS setup, which can be obnoxious if you try to configure it by hand, but there again, it's an impl

    • Once upon a time it was an issue of cost of h/w logic

      No, it was an issue of the ISO specs being bloated and incomprehensible. The human cost had much more to do with their failure than the hardware cost.

  • Latency again (Score:4, Insightful)

    by Twinbee ( 767046 ) on Saturday February 26, 2011 @09:14AM (#35323258)

    I've seen it time and time again, people just generally don't care about latency, or even deny it exists in many cases (buffer bloat is certainly one cause of latency).

    Everything from changing channels with your TV remote, to number entry on a mobile phone, to the frame delays you get from LCD monitors, to the soundcard delay, to the GUI widgets you click on - it's all over the place, and it can wreck the experience, or degrade it in proportion to how big the delay is. Just because latency is harder to measure doesn't mean it isn't very important, especially when it builds up with lots of other 'tiny' delays to make one big delay.

    • by mtaht ( 603670 )
      Latency issues were driving me insane upon my return to the USA. http://nex-6.taht.net/posts/Beating_the_speed_of_light_on_the_web/ [taht.net] The huge bandwidths advertised here, versus the actual "speed", reminded me of a scene in The Marching Morons [wikipedia.org], where someone steps into his hot-looking sports car, smoke and sounds come out like he is doing 100 mph, the speedo says the same, and then he looks outside the car to see the countryside slooooowly going by. Many Americans have confused "Bandwidth" with "Speed". Ban
      • The bulk of Internet traffic in the US these days is streaming video. For that you need big bandwidth and big buffers, not low latency.

        That said, I wish we could settle on an Internet-wide QoS implementation and get both. Some packets have a legitimate need to cut in line. It would be workable if ISPs advertised both 'total' bandwidth and a smaller amount of 'turbo' bandwidth, or whatever stupid name they want to use for it, which is the fraction of your bandwidth that is not over-subscribed. By setti

        • Re:Latency again (Score:4, Insightful)

          by mtaht ( 603670 ) on Saturday February 26, 2011 @11:00AM (#35323796) Homepage
          "The bulk of Internet traffic in the US these days is streaming video. For that you need big bandwidth and big buffers, not low latency. " Emphatically not true. ESPECIALLY for streaming video, you need a functioning feedback mechanism (tcp acks or ECN or some other mechanism) to slow down the video periodically, so that it *doesn't* overflow what buffers you have, and catastrophically drop all the packets in the queue, resulting in stuttering video.
        • Re:Latency again (Score:4, Informative)

          by rcpitt ( 711863 ) on Saturday February 26, 2011 @01:11PM (#35324766) Homepage Journal
          I deal with streaming video daily - from a producer, distributor and support point of view.

          "Why does the web site load so slowly?" is the classic question - caused in many cases by the "eagleholic" having 4 live eagle nest video streams running in one window while trying to post observations and screencaps to the web site in another.

          Believe me - there is ample reason to deal with the problem, as most of today's home networks are used for more than just one thing at a time. Mom is watching video, sis is uploading pictures of her party to Facebook, son is playing online games and dad is trying to listen to streaming audio - and NOTHING is working correctly, despite the fact that this is a trivial load for even a T1 (1.544Mbps), let alone today's high-speed cable (30Mbps down and 5Mbps up). We used to run 30+ modems and web sites and email and all manner of stuff over bonded 56K ISDN lines, for pity's sake - and we got better latency than the links today.

          What's the problem? The latency for the "twitch" game packets has gone from 10ms to 4000ms or more - and the isochronous audio stream is jerky because it's bandwidth-starved, and the upload takes forever because the ACKs from FB can't get through the incoming video dump from YouTube (with its fast-start window pushed from the default 3 to 11 or 12), and by the time the video is half over, the link to YouTube has dropped because it took 30 seconds or more for the buffer to drain after the first push and the link had timed out.

          That's the problem - you need low latency for some things at the same time you need high throughput for others - and it is possible, and can be done - and IS done, if things are tuned correctly. But correctly tuning the use of buffers is an art today, not a science - and the ever-changing (by 3-4 orders of magnitude) needs of today's end-point routers have pushed the limits of the AQM (active queue management) algorithms currently available, even when they're turned on (which in most cases they're not, it seems).

    • by PPH ( 736903 )
      Yeah. But if it messes with my first post status, I want it fixed!
      • by rcpitt ( 711863 )
        And that is exactly the problem with QOS that is under the control of someone who has a stake in the outcome.

        "I want everything louder than everything else" (Meat Loaf) epitomizes the net today - we have Google screwing with the fast start window and Microsoft pretty much ignoring it and setting it as large as possible in some cases (they do other things right though it seems)

        The buffer bloat problem is one born of history and ignorance:

        History - it used to be that we could not put enough buffer RAM i

    • by Idbar ( 1034346 )
      The problem is not just latency. It's latency AND packet losses, which can dramatically reduce the available capacity for TCP flows - particularly if the router is not well designed and there's no algorithm in place to compensate for the suboptimal design.

      A poorly designed router can sabotage the performance of TCP, causing overall slowness in your connections - particularly on those 10Mbps you're paying for and want to work properly.
      • by rcpitt ( 711863 )
        "Any packet loss is bad" - that's the mantra I get from network engineers - and then the idiots don't turn on ECN (Explicit Congestion Notification) or run some bad-ass piece of crap that resets the ECN that is already on the packets they're transiting - or their routers don't respect the notifications or...

        (Reasonable) packet loss or ECN - pick one - and then tell your up and downstream neighbors why you picked it (hopefully ECN will find its way into near 100% deployment ASAP) and why they should respec
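
        In sketch form, the choice being described (field names here are illustrative, not a real stack's API):

            # When AQM decides a packet must carry a congestion signal:
            # an ECN-capable flow gets the packet *marked* and delivered,
            # while a non-ECN flow loses the packet outright.
            def signal_congestion(packet):
                if packet.get("ect"):       # sender negotiated ECN
                    packet["ce"] = True     # mark Congestion Experienced
                    return packet           # receiver echoes it; sender slows
                return None                 # no ECN: a drop is the only signal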

  • by RevWaldo ( 1186281 ) on Saturday February 26, 2011 @12:04PM (#35324246)
    Every packet is sacred.
    Every packet is great.
    If a packet is wasted,
    TCP gets quite irate.

    Let the heathen drop theirs
    When their RAM is spent.
    TCP shall make them pay for
    Each packet that can't be sent.

    Every packet is wanted.
    To this we are sworn.
    From real-time data from CERN
    To the filthiest of porn.

    Every packet is sacred.
    Every packet is great.
    If a packet is wasted,
    TCP gets quite irate.

    • by mtaht ( 603670 )
      Oh, that was a wonderful parody of an already wonderful parody. Thanks for that. I'd probably modify it a little bit for accuracy - may I paste a copy over to our humor page? http://www.bufferbloat.net/projects/bloat/wiki/Humor [bufferbloat.net] The bufferbloat problem is so big - in hundreds of millions of devices today, and millions more in the pipeline - that if we didn't laugh sometimes, we'd explode.
    • by CAIMLAS ( 41445 )

      Best. Comment. Ever. That's going on my door at work - thanks.

  • Most delays are due to users connecting to their ADSL modem via Ethernet and not doing traffic management properly.

    On a congested link this can cause large delays, as Ethernet normally has a 1000-packet buffer in the Linux kernel and the ADSL modem has a similar buffer. You only need a couple of heavy connections that want to go faster than the ADSL will support and those buffers start to fill up real fast. You can easily end up with latencies measured in seconds if you have a lot of connections running (say bitto

  • For home users with a Linux router: set an HTB queue with a maximum egress rate to the modem a little less than your sustained upstream rate. At least this worked for me... I never had problems with saturated upstream causing huge lags after doing this. A toy sketch of why it works follows below.
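
    The sketch (rates are assumed example numbers, not real tc/HTB settings): by shaping egress just below the modem's uplink rate, the modem's dumb buffer never fills, and any queueing happens in the Linux box where it can be managed.

        import time

        class TokenBucket:
            """Crude shaper: allow packets only as fast as tokens accrue."""
            def __init__(self, rate_bytes_per_s, burst_bytes):
                self.rate, self.burst = rate_bytes_per_s, burst_bytes
                self.tokens, self.last = burst_bytes, time.monotonic()

            def allow(self, packet_bytes):
                now = time.monotonic()
                self.tokens = min(self.burst,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= packet_bytes:
                    self.tokens -= packet_bytes
                    return True   # send now; the modem's buffer stays empty
                return False      # hold here, where we control the queue

        shaper = TokenBucket(112_500, 3000)  # ~90% of a 1 Mbit/s uplink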

    After reading this guy's bufferbloat rant I largely agree with him, with some exceptions:

    1. What do multiple TCP sessions have to do with circumvention of congestion avoidance? TCP congestion avoidance needs to work with lots and lots of TCP sessions at once

    • by mtaht ( 603670 )

      Sort of in answer to both of your questions the bufferbloat.net servers are configured as follows:

      http://www.bufferbloat.net/projects/bloat/wiki/Dogfood_Principle [bufferbloat.net]

      trying at every point to make sure HTTP 1.1 actually got used.

      We survived today's slashdotting. Handily.

      That said, your points are well made. SPDY is part of the chromium browser and looks to have some potential.

      In my case, I like the idea of smarter - and eventually sctp-enabled - proxies, especially on wireless hops. See thread at:

      https://lists.b [bufferbloat.net]

    • by jg ( 16880 )

      Re: 1. I've always thought that the congestion window to the same end-point should be shared; but that's not the way TCP implementations work, and wishing they worked that way won't make the problem go away. And, as I've shown, bufferbloat is not a TCP phenomenon in any case.

      Re: 2. HTTP is a lousy protocol in and of itself, and having to do it on top of TCP makes it yet harder. It is the fact that HTTP is so ugly that makes so much else difficult. And I disagree with your claim that high latency links won't

  • Corrected Git URL (Score:4, Informative)

    by JamieF ( 16832 ) on Saturday February 26, 2011 @11:42PM (#35328518) Homepage

    The link in the /. story to debloat-testing should go here: git://git.infradead.org/debloat-testing.git [git].

    git:gitinfradeadorgdebloat-testinggit is not a valid URL.
