Intel Announces Open Fibre Channel Over Ethernet
sofar writes "Intel has just announced and released source code for their Open-FCoE project, which creates a transport allowing native Fibre Channel frames to travel over ordinary Ethernet cables to any Linux system. This extremely interesting development will mean that data centers can lower costs and maintenance by reducing the amount of Fibre Channel equipment and cabling while still enjoying its benefits and performance. The new standard is backed by Cisco, Sun, IBM, EMC, Emulex, and a variety of others working in the storage field. The timing of this announcement comes as no surprise given the uptake of 10-Gb Ethernet in the data center."
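The core idea is simple layering: a native FC frame rides directly inside an Ethernet frame with FCoE's own Ethertype (0x8906), with no IP or TCP in between. The sketch below illustrates just that layering; the real FC-BB-5 encapsulation also carries a version field, reserved bytes, SOF/EOF delimiters, and padding, all omitted here.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # Ethertype assigned to FCoE

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an Ethernet frame.

    Simplified layering sketch only: the real FCoE encapsulation adds a
    version field, reserved bytes, SOF/EOF delimiters, and padding.
    """
    # Standard 14-byte Ethernet II header: dst MAC, src MAC, Ethertype.
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # The FC frame travels as the Ethernet payload, untouched.
    return eth_header + fc_frame

frame = encapsulate_fc_frame(b"\xff" * 6, b"\x00\x11\x22\x33\x44\x55", b"FC-FRAME-BYTES")
```

Note that because there is no IP layer, FCoE frames are not routable; the storage traffic stays on the local Ethernet segment, which is part of why it is pitched at the data center rather than the WAN.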
I'm more interested in AoE (Score:0, Interesting)
High End customers will not go to this. (Score:3, Interesting)
Re:Speed? (Score:2, Interesting)
Mostly, I think this technology will compete against iSCSI, not dedicated fibre, with all the drawbacks -- plus an added drawback of currently being single-platform.
Re:High End customers will not go to this. (Score:5, Interesting)
I expect you're right, but it's interesting to note they're referring to this as Fibre Channel over Ethernet, and not over IP. The reduction in overhead there (not just packet size, but avoiding the whole IP stack) might be enough to really help; and if you're running separate 10 Gigabit Ethernet for the storage subsystem (i.e. not piggybacking on an existing IP network) it might be really nice. Or at least, comparable in performance and a heck of a lot cheaper.
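The overhead difference can be put in rough numbers. Header sizes only, options and trailers ignored, and the FCoE encapsulation figure is approximate, so treat these as illustrative:

```python
# Per-frame header overhead in bytes (rough, illustrative figures).
ETH = 14        # Ethernet II header
IP = 20         # IPv4 header, no options
TCP = 20        # TCP header, no options
ISCSI_BHS = 48  # iSCSI Basic Header Segment
FCOE = 14       # FCoE encapsulation header (version/reserved/SOF), approx.
FC = 24         # native Fibre Channel frame header

# iSCSI path: SCSI command wrapped in iSCSI, TCP, IP, then Ethernet.
iscsi_overhead = ETH + IP + TCP + ISCSI_BHS

# FCoE path: native FC frame wrapped directly in Ethernet.
fcoe_overhead = ETH + FCOE + FC

print(iscsi_overhead, fcoe_overhead)  # 102 vs 52 bytes per frame
```

The byte count is the smaller half of the story, though: skipping TCP/IP also means no checksumming, segmentation, or congestion-control work in the host stack, which is where the CPU savings would come from.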
On the other hand, really decent switches that can cope with heavy usage of 10-GigE without delaying packets at all aren't going to be massively cheap, and you'd need very high quality NICs in all the servers as well. Even then, fibre's still probably going to be faster than copper... but that's just something I made up. Maybe someone who knows more about the intricacies of transmitting data over each can enlighten us all?
There was recently an article about "storing" data within fibre as sound rather than converting it for storage in electrical components, since the latter is kind of slow; how does this compare to transmission via current over copper?
Re:Speed? (Score:3, Interesting)
Re:Speed? (Score:3, Interesting)
My only point is that there are folks doing this and it tends to be the guys with large storage needs, moderate budgets, and a great deal of freedom from corporate standards and vendor influence.
Stay with them; these are good environments. BTW, I am not anti-standards, but at the end of the day they need to make sense. That is, not a standard for pure political posturing.
Re:High End customers will not go to this. (Score:2, Interesting)
Re:High End customers will not go to this. (Score:3, Interesting)
Basic setup is approximately this: CPUs for both servers and clients range between AMD XP 3500+ to AMD X2 4800+. Motherboards are Asus (Nvidia 550 and AMD690) cards, with 2-4GB memory plus an extra SATA card on the iSCSI servers, and extra rtl8168/9 gigabit cards (the forcedeth driver has some issues). Disks on the iSCSI servers are striped with LVM, but not to more than 3 spindles (I don't care that much about maxing speed, I just want close-to-local disk performance). Tuning's been mainly on the iSCSI side with InitialR2T and ImmediateData. I've played around with network buffers, but basically come to the conclusion that it's more efficient in my case to throw RAM at the problem.
The peak rates (90-97MB per sec) have been obtained on completely unloaded systems, and the standard 40-60MB/s read is with disk mirrored against both iSCSI systems (and then striped on the iSCSI systems).
"I have a buddy that has done the same but with NFS"
Ah. Ehm. NFS. Yes.
Well, to tell the truth, I started out this interesting journey towards a home SAN using diskless systems booted over PXE and mounting NFS roots (mainly to silence my MythTV systems, and simplify backups). Let's just say that after testing and switching the first system over to iSCSI I could barely believe my eyes.
NFS is much, much, much harder to get decent performance out of. No matter how much tuning I've done, I've rarely managed to get more than 20-40MB/s peaks, and for many small file accesses the performance is horrible. I'd thought that was what I could expect out of a gigabit LAN; after all, NFS saturated a 100Mbps network. I was quite surprised when my first iSCSI tests got 60-70MB/sec.
If your friend's setup is such that block devices would work instead of NFS (at least for some parts), I'd really suggest he try running an iSCSI target. I can't vouch for the Solaris version, but I know there is one and that it sounds similar to the Linux one. Linux ietd (iSCSI Enterprise Target) has been very stable and high-performing for me (more than a year running it now with no lost data).
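For anyone wanting to try this, the two tuning parameters mentioned earlier are set per-target in ietd.conf. This fragment is a hypothetical example; the IQN and the LVM volume path are invented, so substitute your own:

```
# /etc/ietd.conf -- hypothetical example target
Target iqn.2008-01.lan.example:storage.disk1
    # Back the LUN with a striped LVM volume
    Lun 0 Path=/dev/vg0/iscsi_lv,Type=fileio
    # Let the initiator send data without waiting for a Ready-To-Transfer
    InitialR2T No
    # Allow data in the same PDU as the SCSI command
    ImmediateData Yes
```

Both settings cut round trips on writes, which is likely where a chunk of the small-file win over NFS comes from.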