Intel Networking Software Linux

Intel Announces Open Fibre Channel Over Ethernet

sofar writes "Intel has just announced and released source code for their Open-FCoE project, which creates a transport allowing native Fibre Channel frames to travel over ordinary Ethernet to any Linux system. This development means that data centers can lower costs and maintenance by reducing the amount of Fibre Channel equipment and cabling while still enjoying its benefits and performance. The new standard is backed by Cisco, Sun, IBM, EMC, Emulex, and a variety of others working in the storage field. The timing of this announcement comes as no surprise given the uptake of 10-Gb Ethernet in the data center."
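
FCoE keeps each Fibre Channel frame intact and simply wraps it in an Ethernet frame with its own EtherType (0x8906). Below is a minimal sketch of that encapsulation in Python; the header and trailer widths follow the FC-BB-5 draft as best they can be reconstructed here, and the SOF/EOF code values and addresses are illustrative assumptions, not normative values.

    import struct

    FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

    def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
        """Wrap a raw FC frame (24-byte header + payload + CRC) in Ethernet."""
        eth_hdr = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
        # FCoE header: 4-bit version plus reserved bits, then a 1-byte
        # start-of-frame code (0x2E ~ SOFi3; treat the value as illustrative).
        fcoe_hdr = bytes(13) + bytes([0x2E])
        # FCoE trailer: 1-byte end-of-frame code (0x41 ~ EOFn) plus padding.
        fcoe_trailer = bytes([0x41]) + bytes(3)
        return eth_hdr + fcoe_hdr + fc_frame + fcoe_trailer

    # Dummy 24-byte FC header, empty payload, 4-byte CRC, just to show sizing.
    frame = fcoe_frame(bytes(6), bytes(6), bytes(24) + bytes(4))
    print(len(frame), "bytes on the wire before the Ethernet FCS")

Since a full-sized FC data field is 2112 bytes, FCoE needs "baby jumbo" Ethernet frames of roughly 2.5 KB, which is one reason the announcement is tied to the new 10-GbE generation of switches rather than legacy gear.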


  • by Anonymous Coward on Tuesday December 18, 2007 @09:14AM (#21737644)
    That's not Age of Empires, but ATA over Ethernet, a lightweight protocol which would be great for network booting Windows. Does anyone know of a free AoE initiator for Windows XP? The etherboot project already has AoE capability in its gPXE stack: http://www.etherboot.org/ [etherboot.org]
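
    For a sense of how lightweight AoE is, here is a sketch of the broadcast "query config" discovery frame, following the AoE r11 spec as best it can be recalled here (EtherType 0x88A2, a 10-byte command header); take the field layout as an assumption rather than a reference implementation.

        import struct

        AOE_ETHERTYPE = 0x88A2

        def aoe_query_config(src_mac: bytes) -> bytes:
            """Broadcast 'query config' (command 1) to all shelves/slots."""
            eth = b"\xff" * 6 + src_mac + struct.pack("!H", AOE_ETHERTYPE)
            # ver/flags (version 1), error, major 0xFFFF and minor 0xFF
            # (the broadcast address), command, tag.
            hdr = struct.pack("!BBHBBI", 0x10, 0, 0xFFFF, 0xFF, 1, 0)
            return eth + hdr

        print(aoe_query_config(bytes(6)).hex())

    The entire request is 24 bytes of headers with no IP or TCP underneath, which is exactly why it appeals for PXE-style network booting.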
  • by BrianHursey ( 738430 ) on Tuesday December 18, 2007 @09:31AM (#21737744) Homepage Journal
    As we have seen with iSCSI, the bandwidth capability over Ethernet just is not there. Working with EMC, I expect this will probably be great for the low-end company that needs a mid-tier or low-tier environment. However, large corporations with large databases and high numbers of systems still need to stay with Fibre Channel fabrics. This will probably appear only on mid-tier platforms like the CLARiiON.
  • Re:Speed? (Score:2, Interesting)

    by shaggy43 ( 21472 ) on Tuesday December 18, 2007 @09:44AM (#21737842)
    I support an account that has completely saturated four 4G ISLs between two Brocade 48000s, and had to re-balance their fibre. Granted, an individual HBA doesn't hit a sustained 2G/sec, but 16G/sec saturated to a pair of HDS Thunders is impressive.

    Mostly, I think this technology will compete against iSCSI, not dedicated fibre, with all the drawbacks -- plus an added drawback of currently being single-platform.
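
    For anyone checking the arithmetic above: four 4G ISLs carry roughly 1.7 GB/s of payload once 8b/10b line coding is accounted for (4GFC signals at 4.25 Gbaud). A quick back-of-the-envelope in Python:

        ISLS = 4
        BAUD = 4.25e9            # 4GFC line rate
        payload = BAUD / 10      # 8b/10b coding: 10 line bits per data byte
        print(f"per ISL:   {payload / 1e6:.0f} MB/s")
        print(f"aggregate: {ISLS * payload / 1e6:.0f} MB/s across {ISLS} ISLs")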
  • by totally bogus dude ( 1040246 ) on Tuesday December 18, 2007 @09:53AM (#21737904)

    I expect you're right, but it's interesting to note they're referring to this as Fibre Channel over Ethernet, and not over IP. The reduction in overhead there (not just packet size, but avoiding the whole IP stack) might be enough to really help; and if you're running separate 10 Gigabit Ethernet for the storage subsystem (i.e. not piggybacking on an existing IP network) it might be really nice. Or at least comparable in performance and a heck of a lot cheaper.

    On the other hand, really decent switches that can cope with heavy usage of 10-GigE without delaying packets at all aren't going to be massively cheap, and you'd need very high quality NICs in all the servers as well. Even then, fibre's still probably going to be faster than copper... but that's just something I made up. Maybe someone who knows more about the intricacies of transmitting data over each can enlighten us all?

    There was recently an article about "storing" data within fibre as sound rather than converting it to electrical form for storage, since the latter is kind of slow; how does this compare to transmission via current over copper?
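
    The overhead point above can be made concrete. Below is a rough per-frame comparison of FCoE's fixed encapsulation against a minimal iSCSI/TCP/IPv4 stack, using the usual minimum header sizes (the FCoE figures assume the FC-BB-5 layout; treat the exact numbers as a sketch):

        FC_PAYLOAD = 2112                 # max FC data field, bytes

        # Ethernet + FCoE header + FC header + FC CRC + FCoE trailer
        fcoe_overhead = 14 + 14 + 24 + 4 + 4
        # Ethernet + IPv4 + TCP + iSCSI basic header segment
        iscsi_overhead = 14 + 20 + 20 + 48

        for name, ovh in (("FCoE", fcoe_overhead), ("iSCSI", iscsi_overhead)):
            print(f"{name:5}: {ovh:3} B overhead, "
                  f"{ovh / (ovh + FC_PAYLOAD):.1%} of a full frame")

    Either way the per-frame byte cost is small; the practical difference is CPU time spent in the TCP/IP stack and TCP's behaviour under congestion, which is why FCoE leans on lossless Ethernet instead.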

  • Re:Speed? (Score:3, Interesting)

    by jsailor ( 255868 ) on Tuesday December 18, 2007 @10:49AM (#21738494)
    For that type of project, look to the hedge fund community. I know of 2 hedge funds that have built their own storage systems that way - Ethernet, Linux, direct-attached disk, and a lot of custom code. My world doesn't allow me to get into the details, so I can't elaborate. My only point is that there are folks doing this, and it tends to be the guys with large storage needs, moderate budgets, and a great deal of freedom from corporate standards and vendor influence.
  • Re:Speed? (Score:3, Interesting)

    by canuck57 ( 662392 ) on Tuesday December 18, 2007 @10:59AM (#21738620)

    My only point is that there are folks doing this and it tends to be the guys with large storage needs, moderate budgets, and a great deal of freedom from corporate standards and vendor influence.

    Stay with them, these are good environments. BTW, I am not anti-standards, but at the end of the day they need to make sense. That is, not a standard for pure political posturing.

  • by myz24 ( 256948 ) on Tuesday December 18, 2007 @03:09PM (#21742018) Homepage Journal
    I think you could follow up with some info about your setup. I mean, there is no way you're getting those speeds without tuning some network parameters or without some serious CPU and RAID setup. It's not that I don't believe you; I have a buddy who has done the same but with NFS, and he's using an OpenSolaris system with TCP-offload cards and a heck of a RAID array.
  • by Znork ( 31774 ) on Tuesday December 18, 2007 @04:10PM (#21743022)
    "those speeds without tuning some network parameters or with some serious CPU and RAID setup."

    The basic setup is approximately this: CPUs for both servers and clients range from an AMD XP 3500+ to an AMD X2 4800+. Motherboards are Asus (Nvidia 550 and AMD 690) boards with 2-4GB memory, plus an extra SATA card on the iSCSI servers and extra rtl8168/9 gigabit cards (the forcedeth driver has some issues). Disks on the iSCSI servers are striped with LVM, but not to more than 3 spindles (I don't care that much about maxing speed, I just want close-to-local-disk performance). Tuning has been mainly on the iSCSI side with InitialR2T and ImmediateData. I've played around with network buffers, but basically come to the conclusion that it's more efficient in my case to throw RAM at the problem.

    The peak rates (90-97 MB/s) have been obtained on completely unloaded systems, and the standard 40-60 MB/s read is with disk mirrored across both iSCSI systems (and then striped on the iSCSI systems).

    "I have a buddy that has done the same but with NFS"

    Ah. Ehm. NFS. Yes.

    Well, to tell the truth, I started out this interesting journey towards a home SAN using diskless systems booted over PXE and mounting NFS roots (mainly to silence my MythTV systems and simplify backups). Let's just say that after testing and switching the first system over to iSCSI I could barely believe my eyes.

    NFS is much, much, much harder to get decent performance out of. No matter how much tuning I've done, I've rarely managed to get more than 20-40MB/s peaks, and for many small file accesses the performance is horrible. I'd thought that was what I could expect out of a gigabit LAN; after all, NFS saturated a 100Mbps network. I was quite surprised when my first iSCSI tests got 60-70MB/sec.

    If your friend's setup is such that block devices would work instead of NFS (at least for some parts), I'd really suggest he try running an iSCSI target. I can't vouch for the Solaris version, but I know there is one, and it sounds similar to the Linux one. Linux ietd (iSCSI Enterprise Target) has been very stable and fast for me (more than a year running it now with no lost data :) ). It lacks some features, like some forms of SCSI-3 reservation, but I can live with that.
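
    To make the measurement method concrete: the MB/s figures above are the kind of numbers a plain sequential-read test against the iSCSI-backed block device produces. A minimal sketch follows, assuming a hypothetical device path; note it reads through the page cache, so for honest numbers you would drop caches first or use O_DIRECT (as dd's iflag=direct does).

        import time

        DEV = "/dev/sdb"        # hypothetical iSCSI-backed block device
        BLOCK = 1 << 20         # 1 MiB per read
        TOTAL = 1 << 30         # stop after 1 GiB

        with open(DEV, "rb", buffering=0) as f:
            start, done = time.monotonic(), 0
            while done < TOTAL:
                chunk = f.read(BLOCK)
                if not chunk:
                    break
                done += len(chunk)
        elapsed = time.monotonic() - start
        print(f"{done / elapsed / 1e6:.1f} MB/s sequential read")

    On the target side, InitialR2T=No and ImmediateData=Yes are standard iSCSI negotiation keys that cut a round trip from small writes, which matches the tuning described above.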
