Pushing the Limits of Network Traffic With Open Source (cloudflare.com) 55
An anonymous reader writes: CloudFlare's content delivery network relies on their ability to shuffle data around. As they've scaled up, they've run into some interesting technical limits on how fast they can manage this. Last month they explained how the unmodified Linux kernel can only handle about 1 million packets per second, when easily-available NICs can manage 10 times that. So, they did what you're supposed to do when you encounter a problem with open source software: they developed a patch for the Netmap project to increase throughput. "Usually, when a network card goes into the Netmap mode, all the RX queues get disconnected from the kernel and are available to the Netmap applications. We don't want that. We want to keep most of the RX queues back in the kernel mode, and enable Netmap mode only on selected RX queues. We call this functionality: 'single RX queue mode.'" With their changes, Netmap was able to receive about 5.8 million packets per second. Their patch is currently awaiting review.
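To make the "single RX queue mode" idea concrete, here is a minimal sketch of a netmap consumer that attaches to just one hardware ring using the nm_open()/nm_nextpkt() helpers from netmap's user-space header. The interface name and ring number are made up for the example, and note that with stock netmap the whole NIC is still detached from the kernel; keeping the remaining rings serviced by the kernel stack is exactly the behaviour CloudFlare's patch adds.

    #define NETMAP_WITH_LIBS
    #include <net/netmap_user.h>
    #include <poll.h>
    #include <stdio.h>

    int main(void)
    {
        /* "-0" asks for hardware ring 0 only; interface and ring number
           are illustrative, not CloudFlare's actual configuration. */
        struct nm_desc *d = nm_open("netmap:eth0-0", NULL, 0, NULL);
        if (d == NULL) {
            perror("nm_open");
            return 1;
        }

        struct pollfd pfd = { .fd = d->fd, .events = POLLIN };
        struct nm_pkthdr h;
        unsigned long received = 0;

        while (poll(&pfd, 1, 1000) >= 0) {          /* wait for packets on the ring */
            const unsigned char *buf;
            while ((buf = nm_nextpkt(d, &h)) != NULL) {
                (void)buf;                          /* inspect or drop the packet here */
                received++;
            }
        }

        printf("received %lu packets\n", received);
        nm_close(d);
        return 0;
    }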
Re: (Score:3, Informative)
If I have a 100Mb/s NIC, I'm only getting 10 MB/s on Linux? I doubt that.
Packets != Bytes
Maxxing the NIC card (Score:2)
From TFA:
... which implies that NICs can easily manage more than 10 million packets per second, right?
5.8 million packets per second might be fast, but it is still _much lower_ than the theoretical >10 million packets per second max speed ...
I am curious: has any software (open source or proprietary) successfully achieved the >10 million packets per second threshold yet?
Re: (Score:2)
10,000,000,000 / (64 * 8) = 19,531,250
That's a maximum of 19m packets/second - assuming every frame is the minimum size, and you've a ten-gigabit ethernet interface.
So 10mp/s isn't realistic in most situations, but it is possible - and you might hit it if you're trying to monitor traffic on a major backbone link, which is exactly the sort of thing netmap may be used for.
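For anyone redoing that arithmetic, here is a small sketch. The second figure additionally counts the 8-byte preamble/SFD and 12-byte inter-frame gap that occupy the wire but are not part of the 64-byte minimum frame (standard Ethernet numbers, not something from the thread):

    #include <stdio.h>

    int main(void)
    {
        const double link_bps   = 10e9;           /* 10GbE line rate, bits/s */
        const double min_frame  = 64;             /* minimum Ethernet frame, bytes */
        const double wire_frame = 64 + 8 + 12;    /* + preamble/SFD + inter-frame gap */

        printf("ignoring framing overhead: %.0f pkt/s\n", link_bps / (min_frame * 8));
        printf("counting framing overhead: %.0f pkt/s\n", link_bps / (wire_frame * 8));
        return 0;
    }

Counting the framing overhead brings the ceiling down to roughly 14.88 million packets per second, the figure usually quoted for line-rate 64-byte traffic on 10GbE.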
Re:What does this mean? (Score:5, Informative)
A packet is not a byte. A packet is a sequence of bits including an address, other header information, and the actual payload.
An IPv4 packet, for example, has a 20-byte (160-bit) header and a maximum payload of 65,515 bytes (though it is usually much smaller in practice).
If you were to send a lot of packets with only a single-byte payload, each packet would be 168 bits, and a 100 Mb/s link would carry about 600,000 packets per second. On a gigabit connection, though, such unusual traffic would already start to run into the limit.
Note that normally you send much more than a single byte of information per packet, so in most real applications you would need much higher link speeds to hit the limit. At 105 bytes of payload the total packet length would be 1,000 bits, which works out to about 1 million packets per second on gigabit hardware, right at the limit. Still, most high-bandwidth traffic carries far more information in each packet and so does not usually hit such limits.
The limit has really started to bite now that 10 gigabit and faster network cards have come down in price.
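A quick sketch of that arithmetic for a few payload sizes; as in the comment above, only the 20-byte IPv4 header is counted and link-layer framing is ignored, so treat the numbers as rough upper bounds:

    #include <stdio.h>

    int main(void)
    {
        const double links[]    = { 100e6, 1e9, 10e9 };   /* link speeds, bits/s */
        const double payloads[] = { 1, 105, 1460 };       /* payload sizes, bytes */

        for (int l = 0; l < 3; l++) {
            for (int p = 0; p < 3; p++) {
                double packet_bits = (payloads[p] + 20) * 8;   /* + 20-byte IPv4 header */
                printf("%6.0f Mb/s link, %4.0f-byte payload: %9.0f pkt/s\n",
                       links[l] / 1e6, payloads[p], links[l] / packet_bits);
            }
        }
        return 0;
    }

The 1-byte case on 100 Mb/s reproduces the ~600,000 packets/second above, and the 105-byte case on gigabit lands right at 1 million packets/second.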
Re: (Score:1)
The limit has really started to bite now that 10 gigabit and faster network cards have come down in price.
10G coming down in price? We're starting to get 40G pretty commonly now. I think the cards can be had for around 1k or less. Hell, I wouldn't be surprised if we start seeing some 100G or a weird intermediate speed. With RoCE, Ethernet is starting to be a viable alternative to Infiniband for MPI/RDMA, and 40G switch ports are amazingly cheap and can be broken into 4x10G interfaces to support legacy 10G servers.
Re: (Score:2)
If you think that RoCE is a viable alternative for Infiniband/MPI then you have been on the crack pipe again.
Sure, you might be able to replace QDR Infiniband with RoCE, but the Infiniband world has moved on, and replacing EDR with RoCE is a sick joke. While your RoCE gets maybe 1.5us latency, which is in the ballpark of QDR at 1.2us, EDR is doing 0.5us latency, and Infiniband is as much about latency as throughput. In addition, EDR Infiniband is a lot cheaper than RoCE at 100Gbps.
Re: (Score:2)
your question is in the area of: "you claim the sun is bright. show me your sources!!!"
his claim is as much common knowledge as water being wet, the earth being round, etc.
http://bsd.slashdot.org/story/... [slashdot.org]
Re: (Score:1)
your question is in the area of: "you claim the sun is bright. show me your sources!!!"
No, his question is in the area of "you claim the sun illuminates this patch of ground better than this 12kW arc lamp. show me your sources!!!" Sure, the sun is bright. So is a 12kW arc lamp. Making the claim that one illuminates an area better than another requires supporting evidence in the form of luminous intensity measurements.
Since you put it another way, I will too:
"Water is wet." Sure, but is water wetter than alcohol? Ferrocene? Sodium laureth sulfate? His claim is that it's the best, whi
Re: (Score:3)
Sounds about right. Even if you were to ignore TCP/IP overhead, the most you could hope to achieve is 12.5MB/s over a 100Mb/s link. But that has nothing to do with what this article is about.
Re: (Score:3)
The general rule is divide-by-ten: a 100Mb link means about 10MB/s throughput. Divide by eight for the bit-to-byte conversion, but by ten to allow for overhead. It's not exact, but it's a good rule of thumb.
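Roughly where the rule of thumb comes from: a sketch assuming a full 1500-byte MTU, TCP and IPv4 headers without options, and standard Ethernet framing overhead (textbook values, not anything measured in this thread):

    #include <stdio.h>

    int main(void)
    {
        const double link_bps = 100e6;                   /* 100 Mb/s link */
        const double payload  = 1500 - 20 - 20;          /* MTU minus IPv4 and TCP headers */
        const double on_wire  = 1500 + 14 + 4 + 8 + 12;  /* + MAC header, FCS, preamble, IFG */

        double goodput = link_bps * (payload / on_wire) / 8.0;
        printf("ideal TCP goodput: %.2f MB/s\n", goodput / 1e6);   /* about 11.9 */
        return 0;
    }

Even the ideal case leaves only about 11.9 MB/s of payload, and real traffic (ACKs, retransmits, less-than-full packets) pushes it further down toward the divide-by-ten figure.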
Re: (Score:1)
I always thought the best was 12.5MB/s, and that it didn't matter what system you're using.
But on average, after all the other losses, 100 Mbit/s works out to roughly 10 MByte/s. At least that is what shows up on the display.
Re: (Score:2)
I've never seen better than about 8.5 MB/s sustained at any place I've worked.
This patch and its effects (Score:2, Insightful)
must be thoroughly considered. CloudFlare is the greatest Man-in-the-Middle on the Internet, and don't think for a second they're not collaborating with U.S. agencies who want to get at sensitive data going through their systems.
Re: (Score:1)
"Prince and his team were inspired to start the company after a call from the Department of Homeland Security."(quote from article, not my opinion)
http://exiledonline.com/isucker-big-brother-internet-culture/
Interesting take on it?
This is what routers and switches are for (Score:4, Interesting)
Re: (Score:3)
Their goal is to receive the packets into their own user space analysis software and drop most of them (as being a flood attack).
Their problem is that, using the existing methods, they can't get more than ~1 M packets/s into their software.
I guess they are not using dedicated router hardware because there's no way to run their software on it.
At which point, maybe they need a piece of kit based on Cavium's chips (lots of low-performance cores).
Re:This is what routers and switches are for (Score:5, Interesting)
I work at Cavium on the SDK team (I do all the bootloader stuff for their MIPS chips). The Ubiquiti EdgeRouter Lite uses one of our old (2nd gen CN5020) low-end dual-core chips and is able to handle 1M packets/second by running the packet processing on a dedicated core and Linux on the other core. Our current generation (4th gen) is far faster. I work with chips from 4 up to 48x2 cores (48 cores, 2 chips running in NUMA). There's a lot of support for offloading packet processing in our chips, for example directing packet flows to different groups of CPU cores. There are also various engines built into the chips for things like compression, pattern matching, deep packet inspection, encryption, RAID calculations and more. We are also selling NIC cards (Liquid I/O) which can run Linux on the NIC as well as dedicated software that can offload a lot; for example, it can perform all the SSL, VPN and firewall stuff on the NIC. I'm working on some of the new ones now. I'd love to see some inexpensive eval boards available, especially with our CN73XX or even CN70xx chip. Even our low-end quad-core CN71xx can handle 10Gbps of traffic.
Re: (Score:2)
Look up the 410Nv [cavium.com]. It has 4 SFP+ ports on it.
Re: (Score:2)
Higher-end routers have hardware dedicated to doing things like deep packet inspection and modification with less software overhead. For example, I work at Cavium, and the CPUs I work with have a lot of dedicated packet-processing hardware designed to offload much of that work onto the chips' many dedicated engines.
SystemD (Score:3, Funny)
Wouldn't it just be easier to put this in systemd?
My company addresses this (Score:5, Interesting)
My employer deals with this on their multi-core MIPS processors. What we do is run Linux on one set of cores and dedicated applications on the other cores. These applications offload most of the TCP/IP stack and only pass the relevant traffic to the kernel. The Ubiquiti EdgeRouter Lite uses one of our lowest-end chips and handles 1M packets/second. Our higher-end chips can easily handle far more packets, and their dedicated cores are also able to take much better advantage of the hardware offload support for forwarding and filtering. Even without using the dedicated special application we can handle 40Gbps or more of traffic on the high-end chips. We can also handle stuff like IPSec at these rates, thanks to built-in encryption and hashing instructions, if coded properly.
Having the right NIC card can also help since some NIC cards can offload things like TCP/IP segmentation and reassembly. I've also dealt with small gigabit switch chips that can offload stuff like NAT but Linux can't really take advantage of that as-is.
There's a lot of room for improvement. Some years ago I was doing performance analysis for Atheros with respect to CPU cache utilization. The biggest bottleneck was the fact that the transmit path in the Linux networking stack would only pass a single packet at a time. Batch processing of packets makes a HUGE difference for WiFi, since groups of packets need to be aggregated for 802.11n, and it would also allow more efficient packet processing for non-wireless traffic. There are a lot of other areas that could be improved as well.
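The batching idea is visible even from plain user space: recvmmsg() pulls several datagrams per system call instead of one. This is only a generic Linux illustration of per-packet versus per-batch overhead (port number and batch size are arbitrary), not the Atheros/driver-level change described above:

    #define _GNU_SOURCE          /* recvmmsg() is a Linux extension */
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <stdio.h>

    #define BATCH 32

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_port        = htons(9999);        /* arbitrary example port */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("socket/bind");
            return 1;
        }

        static char bufs[BATCH][2048];
        struct iovec   iov[BATCH];
        struct mmsghdr msgs[BATCH];
        memset(msgs, 0, sizeof(msgs));
        for (int i = 0; i < BATCH; i++) {
            iov[i].iov_base            = bufs[i];
            iov[i].iov_len             = sizeof(bufs[i]);
            msgs[i].msg_hdr.msg_iov    = &iov[i];
            msgs[i].msg_hdr.msg_iovlen = 1;
        }

        for (;;) {
            int n = recvmmsg(s, msgs, BATCH, 0, NULL);   /* up to BATCH datagrams per syscall */
            if (n < 0) {
                perror("recvmmsg");
                return 1;
            }
            printf("got %d packets in one call\n", n);
        }
    }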
Re: (Score:2)
You can see it clearly if you take a managed switch apart. There are usually two large chips. A very big one that connects to all the interfaces and does the actual switching logic with specialised silicon, and a much smaller x86 or ARM processor that runs the management software.