Intel Announces Open Fibre Channel Over Ethernet
sofar writes "Intel has just announced and released source code for their Open-FCoE project, which creates a transport allowing native Fibre Channel frames to travel over ordinary ethernet cables to any Linux system. This extremely interesting development will mean that data centers can lower costs and maintenance by reducing the amount of Fibre Channel equipment and cabling while still enjoying its benefits and performance. The new standard is backed by Cisco, Sun, IBM, EMC, Emulex, and a variety of others working in the storage field. The timing of this announcement comes as no surprise given the uptake of 10-Gb Ethernet in the data center."
Fiber channel (Score:5, Funny)
In ye olde patch panel
Beats fiber thin
On your chinny-chin-chin
Burma Shave
Need target (Score:1)
Re:Bumper cars. (Score:5, Funny)
eth0 Link encap:Ethernet HWaddr 00:00:0D:03:01:04
inet addr:192.168.1.100 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::000:00f0:0043:0084/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1781638 errors:0 dropped:0 overruns:0 frame:0
TX packets:1651683 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:803882935 (766.6 MiB) TX bytes:333706343 (318.2 MiB)
Interrupt:18 Base address:0xd800
(only the address details are fudged)
Re: (Score:3, Insightful)
FCOE really does rely on "new fangled technology". More than plain switched Ethernet is required; it has to be an enhanced Ethernet that prevents virtually all congestion-related drops.
Work on such features is indeed in progress in both IEEE 802.1 and the IETF. The comparison of FCOE vs. iSCSI in those environments will be a lot more even than the comparisons presented by FCOE champions currently. Those compare storage traffic that requires neither routing nor security, and tests FCOE over forthcoming Ethernet
Re: (Score:2, Funny)
Re: (Score:2)
Re: (Score:1)
Here is the truth:
If you carved the words "ethernet" on a stick and then smeared shit over it, people would stand in line to buy it.
Re: (Score:1)
Re:I'm more interested in AoE (Score:5, Funny)
Re: (Score:1, Funny)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2, Funny)
"You want to put... a demon? On our server?"
"Daemon, it's a daemon."
"..."
Re: (Score:2)
There is something about existing names that already seem to be looked down on that would probably cause them to overlook that possibility for a name.
Re: (Score:1)
Re: (Score:3, Insightful)
ATA, SATA and SAS all have severe connectivity limits. They have no way of addressing a large number of devices, running long distances, or supporting multiple initiators. While they might be fine for your home, they are worthless for the SAN/LAN environment where fibre channel and FCoE are targeted.
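The scale gap above is easy to make concrete. A rough address-space comparison (the 24-bit port ID comes from the Fibre Channel fabric addressing scheme; SATA is point-to-point by design; the script itself is just an illustration, not from any post):

```python
# Why FC-class fabrics scale where desktop buses don't: a rough
# address-space comparison.
FC_PORT_ID_BITS = 24                      # FC fabric port IDs are 24-bit

fc_max_fabric_ports = 2 ** FC_PORT_ID_BITS  # ~16.7 million addressable ports
sata_devices_per_port = 1                   # SATA: one device per host port

print(f"FC fabric: up to {fc_max_fabric_ports:,} addressable ports")
print(f"SATA port: {sata_devices_per_port} device, short cable, single initiator")
```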
Speed? (Score:5, Informative)
Re: (Score:3, Informative)
Re: (Score:3, Informative)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2, Interesting)
Mostly, I think this technology will compete against iSCSI, not dedicated fibre, with all the drawbacks -- plus an added drawback of currently being single-platform.
Re: (Score:2)
Re:Speed? (Score:5, Informative)
Re:Speed? (Score:4, Informative)
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
You mean, 8Gb FC will be out long before 100 Gb ethernet becomes reasonably priced.
10 Gb ethernet is already reasonably priced (compared to FC).
Re: (Score:1)
FCoE is about making Ethernet more like Fibre Channel.
Re: (Score:1)
"10 GbE does not equal 10 Gb FCoE."
Re: (Score:2)
It does when you are doing FCoE in software, which is what this thread is about. Sure, the vendors would like to sell you specialized FCoE cards which will end up costing the same as an Ethernet NIC and an FC HBA put together, but you don't have to buy them.
Re: (Score:2)
FCoE is NOT a FC replacement. (Score:2)
You do realize that 10Gb FC is also available, and netapp has a conflict of interest? FCoE isn't going to do jack for netapp's NAS equipment.
I imagine if the processing overhead isn't too high or offload cards become available then this would be significantly faster than 4Gb FC
It won't have FC's other performance characteristics, and that's a lot of expensive ifs before even getting close.
if you can stand the latency of packing two or more FC frames into an ethernet jumbo frame.
If you could stand the latency, then why on Earth would you be using FC to begin with?
FCoE isn't going to replace FC where FC is needed. It will only make c
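The packing idea floated above is easy to put numbers on. Note that the actual FCoE proposal maps one FC frame per Ethernet frame; the multi-frame packing and the per-frame encapsulation constant below are illustrative assumptions only:

```python
FC_MAX_FRAME = 2148     # bytes: 2112-byte FC payload plus framing/CRC
JUMBO_MTU = 9000        # Ethernet payload available in a common jumbo frame
ENCAP_PER_FRAME = 18    # assumed per-FC-frame encapsulation cost (illustrative)

def fc_frames_per_jumbo(mtu=JUMBO_MTU):
    """Full-size FC frames that would fit if multi-frame packing were allowed."""
    return mtu // (FC_MAX_FRAME + ENCAP_PER_FRAME)

print(fc_frames_per_jumbo())      # 4 with a 9000-byte MTU
print(fc_frames_per_jumbo(1500))  # 0 -- a standard frame can't hold even one
```

Which is the real reason jumbo frames keep coming up in this thread: at the standard 1500-byte MTU a full-size FC frame doesn't fit at all.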
Re: (Score:2)
FCoE s
Re: (Score:1)
As far as I can see this is a way of bridging fibre channels over Ethernet. This does not necessarily mean that you will get fibre-like speed (throughput or latency). I am sure that this will have some use, but it does not mean that high performance data-centres will just be able to use Ethernet instead of fibre.
To me, fibre channel SAN solutions are oversold. They raise the cost per GB/TB much higher than if you just put all the drives a system needs into it right off. Direct attached storage (no swi
Re: (Score:3, Interesting)
Re: (Score:3, Interesting)
My only point is that there are folks doing this, and it tends to be the guys with large storage needs, moderate budgets, and a great deal of freedom from corporate standards and vendor influence.
Stay with them, these are good environments. BTW, I am not anti-standards, but at the end of the day they need to make sense. That is, not a standard for pure political posturing.
Re: (Score:2)
Easier and less expensive to manage and less to go wrong.
When "less" becomes a single point of failure you have problems. In this day and age you have to assume t
Re: (Score:2)
Whatever happened to ATAoE? Wasn't that supposed to be the cheap equivalent to iSCSI / Fibre Channel?
More to the point, how difficult and expensive would it be to build a chip to interface between FCoE and a SATA drive?
I'm still hoping for a cheap consumer solution for attaching drives directly to the network.
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
10GE is a heck of a lot cheaper (Score:5, Informative)
Re: (Score:2)
1) Why don't you just direct connect since you only have 3 HBAs?
2) At least compare it to a 9120 or 9124 (which has 8-port licenses). Anyone knows that a 9140 (40 ports) and a 9506 (a director with 4 FC card slots) is way overkill for what you describe.
I'd say that at the very least, you're misinformed as to wh
Re: (Score:2)
Re: (Score:2)
A couple of things I've learned:
If your company is buying a director and the main goal doesn't coincide with either uptime/high availability or port density, then all your company is doing is making the switch vendor's stock price go up. And, if you don't want the lowest latency, don't buy fibre channel. Always buy what fits. It does everyone a favor in the long
High End customers will not go to this. (Score:3, Interesting)
Re:High End customers will not go to this. (Score:5, Interesting)
I expect you're right, but it's interesting to note they're referring to this as Fibre Channel over Ethernet, and not over IP. The reduction in overhead there (not just packet size, but avoiding the whole IP stack) might be enough to really help; and if you're running separate 10 Gigabit Ethernet for the storage subsystem (i.e. not piggybacking on an existing IP network) it might be really nice. Or at least, comparable in performance and a heck of a lot cheaper.
On the other hand, really decent switches that can cope with heavy usage of 10-GigE without delaying packets at all aren't going to be massively cheap, and you'd need very high quality NICs in all the servers as well. Even then, fibre's still probably going to be faster than copper... but that's just something I made up. Maybe someone who knows more about the intricacies of transmitting data over each can enlighten us all?
There was recently an article about "storing" data within fibre as sound rather than converting it for storage in electrical components, since the latter is kind of slow; how does this compare to transmission via current over copper?
Re:High End customers will not go to this. (Score:5, Informative)
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Granted, you do lose some placement flexibility, which might be a deal-breaker in so
Re: (Score:1)
Re: (Score:2)
(1) Procurve gigE switch w/Jumbo frames turned on
(many) SAS drives
and we can, in production, have 4Gb throughput on our iSCSI SAN.
Tell me again where this "throughput" is hiding?
Regards,
Re:High End customers will not go to this. (Score:4, Insightful)
The bandwidth is there. I can get 960 Mb/s sustained application-layer throughput out of a gigabit ethernet connection. When you have pause frame support and managed layer 3 switches, you can strip away the protocol overhead of iSCSI, and keep the reliability and flexibility in a typical data center.
The goal of this project is not to replace fibre channel fabrics, but rather to extend them. For every large database server at your High End customer, there are dozens of smaller boxes that would greatly benefit from centralized disk storage, but for which the cost of conventional FC would negate the benefit. As you've noted, iSCSI isn't always a suitable option.
You're probably right that people won't use this a whole lot to connect to super-high-end disk arrays, but once you hook up an FCoE bridge to your network, you have the flexibility to do whatever you want with it. In some cases, the cost benefit of 10Gb ethernet vs. 2x 4Gb FC alone will be enough motivation to use it even for very high-end work.
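The 960 Mb/s figure claimed above is plausible on paper. A sketch of the theoretical ceiling, counting only the standard per-frame costs (Ethernet header/FCS/preamble/inter-frame gap, IPv4 and TCP headers without options, no losses):

```python
def tcp_goodput_mbps(mtu, link_mbps=1000):
    """Ideal application-layer throughput over Ethernet at a given MTU."""
    ETH_PER_FRAME = 14 + 4 + 8 + 12   # header + FCS + preamble + inter-frame gap
    TCPIP_HEADERS = 20 + 20           # IPv4 + TCP, no options
    payload_per_frame = mtu - TCPIP_HEADERS
    wire_per_frame = mtu + ETH_PER_FRAME
    return link_mbps * payload_per_frame / wire_per_frame

print(round(tcp_goodput_mbps(1500), 1))  # ~949.3 with standard frames
print(round(tcp_goodput_mbps(9000), 1))  # ~991.4 with jumbo frames
```

So a measured 960 Mb/s sits right between the standard-frame and jumbo-frame ceilings, consistent with jumbo frames and a well-tuned stack.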
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
For what it's worth, the NFS server in my testing was using Fibre Channel storage.
Re: (Score:2)
So I'd really have to wonder what anyone failing to get that is running. I hope they're not paying for it.
Sure, non-cached performance against the IDE
Re: (Score:2, Interesting)
Re: (Score:3, Interesting)
Basic setup is approximately this: CPUs for both servers and clients range between AMD XP 3500+ and AMD X2 4800+. Motherboards are Asus (Nvidia 550 and AMD 690) boards, with 2-4GB memory plus an extra SATA card on the iSCSI servers, and extra rtl8168/9 gigabit cards (the forcedeth driver has some issues). Disks on the iSCSI servers are striped with LVM, but not to more than 3 spindles (I don't care that much about max
Re: (Score:2)
Fiber Channel will be dead in less t
Re: (Score:2)
Re: (Score:2)
Srsly, FC or iSCSI? (Score:2)
Can someone elaborate?
Re: (Score:2)
iSCSI is for implementing a "direct attached storage device" using an IP network (Internet/internet/intranet) as the backbone.
FCoE does not involve IP and is simply a lower cost, possibly better (time will tell), way of replacing optical fabric in data centers.
Re: (Score:2)
Re: (Score:2)
I'm not a datacenter kind of guy, so help me out. If you've got 10 G Ethernet, then why would you want to run FC rather than iSCSI?
I'm not a datacenter guy either, but I am a programmer.
My guess is simply just avoiding the IP stack. I'd guess an IP stack would add some latency, definitely adds some overhead, and most implementations are unlikely to be well optimized for extremely high bandwidth links (10 Gbit/sec).
FCoE avoids the IP stack entirely. If done properly, it can avoid all of the above problems.
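The per-frame overhead difference can be made concrete. The header sizes below are the standard ones for Ethernet, IPv4, TCP (no options) and the 48-byte iSCSI basic header segment; the FCoE encapsulation figure is an assumption for illustration:

```python
ETH = 14          # Ethernet header
IPV4 = 20         # IPv4 header, no options
TCP = 20          # TCP header, no options
ISCSI_BHS = 48    # iSCSI basic header segment
FC_HEADER = 24    # Fibre Channel frame header
FCOE_ENCAP = 18   # assumed FCoE encapsulation bytes (illustrative)

iscsi_per_frame = ETH + IPV4 + TCP + ISCSI_BHS   # header bytes per iSCSI frame
fcoe_per_frame = ETH + FCOE_ENCAP + FC_HEADER    # header bytes per FCoE frame

print(iscsi_per_frame, fcoe_per_frame)
```

And the byte counts understate the gap: the iSCSI path also pays the CPU cost of running those packets through the full TCP/IP stack, which is the latency point made above.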
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2, Insightful)
1) TCP/IP doesn't guarantee in-order delivery of packets (think of stuttering with streaming media, etc...)
2) Frame sizes are smaller and have more overhead than Fibre Channel packets.
3) Most NICs rely on the system to encapsulate & process packets - a smart NIC [TCP Offload Engine card] costs almost as much as a Fibre Channel card.
Re: (Score:3, Informative)
Re: (Score:2)
I think that's called a karma bonus modifier; my post wasn't moderated either way.
We're both right. It just depends on what is meant by "delivery". Delivery is out of order on the wire, and that's what I (and I suspect the original poster) meant. Of course the protocol stack guarantees in-order delivery to the higher layers. That means that the slowest packet is going to dictate the latency of an entire transfer,
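The "slowest packet dictates the latency" point can be sketched: with in-order delivery, packet i reaches the application only once every packet up to i has arrived. (The helper and the millisecond arrival times below are made up for illustration.)

```python
def visible_at(arrivals_ms):
    """When each packet becomes visible to the app under in-order delivery.

    arrivals_ms[i] is the wire arrival time of the i-th packet in sequence.
    """
    out, high_water = [], 0.0
    for t in arrivals_ms:
        high_water = max(high_water, t)   # can't deliver past a gap
        out.append(high_water)
    return out

# One 6 ms straggler (packet 2) stalls every packet behind it:
print(visible_at([1.0, 1.1, 6.0, 1.2]))  # [1.0, 1.1, 6.0, 6.0]
```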
Re: (Score:2)
Yer olde FC product salesman has a much better commission, as FC products have far, far higher margins than ethernet products. Therefore the FC salesman buys you better lunches and invites you to seminars with more free booze, while displaying his company-produced graphs over how cutting-edge lab FC hardware vastly outperforms iSCSI served by a PC from last century.
In your booze addled state you find this reasonable, and refrain from using google or performing actual tes
Too late: I'm already using AoE (Score:2)
http://en.wikipedia.org/wiki/ATA_over_Ethernet [wikipedia.org]
And combine it with Xen or other virtualization technology and you have a really slick setup:
http://xenaoe.org/ [xenaoe.org]
Too late (Score:2)
And the summary is incorrect in saying Intel has just announced it.
Looks like either the
doh... bad
Please do not be OSNews; at least check your articles, for Christ's sake.
Re: (Score:1)
It's that first choice -- Slashdot editors are lousy buffoons.