Intel Announces Open Fibre Channel Over Ethernet

sofar writes "Intel has just announced and released source code for their Open-FCoE project, which creates a transport allowing native Fibre Channel frames to travel over ordinary ethernet cables to any Linux system. This extremely interesting development will mean that data centers can lower costs and maintenance by reducing the amount of Fibre Channel equipment and cabling while still enjoying its benefits and performance. The new standard is backed by Cisco, Sun, IBM, EMC, Emulex, and a variety of others working in the storage field. The timing of this announcement comes as no surprise given the uptake of 10-Gb Ethernet in the data center."
  • by smittyoneeach ( 243267 ) * on Tuesday December 18, 2007 @08:13AM (#21737630) Homepage Journal
    Fiber channel
    In ye olde patch panel
    Beats fiber thin
    On your chinny-chin-chin
    Burma Shave
  • This sounds quite cool, but I don't have any FC storage arrays or the "Fibre Channel Forwarder" they mention, so I would have to wait until they have the target written before being able to try it out.
  • Speed? (Score:5, Informative)

    by Chrisq ( 894406 ) on Tuesday December 18, 2007 @08:16AM (#21737648)
    As far as I can see this is a way of bridging fibre channels over Ethernet. This does not necessarily mean that you will get fibre-like speed (throughput or latency). I am sure that this will have some use, but it does not mean that high performance data-centres will just be able to use Ethernet instead of fibre.
    • Re: (Score:3, Informative)

      by farkus888 ( 1103903 )
      I am not too sure about the latency, but I don't know of any storage solution that can saturate 10 Gb sustained, except maybe something like an array of RAM used as a hard drive. Simply reducing the number of drives on a daisy chain should keep you running happily as far as throughput goes.
      • Re: (Score:3, Informative)

        by afidel ( 530433 )
        Huh? Our piddly 150 spindle SAN could keep a 10Gb link saturated no problem if we had a workload that needed it. In fact that's less than 7MB/s per drive, about one tenth of what these drives are capable of for bulk reads or about one fifth for bulk writes. Even for totally random I/O 150 spindles could probably keep that sized pipe filled.
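
        As a quick back-of-the-envelope check of that claim (a Python sketch; the 80% usable-bandwidth figure is an assumption, not from the post):

        LINK_GBPS = 10          # nominal 10 GbE line rate
        SPINDLES = 150          # drive count from the post above
        EFFICIENCY = 0.80       # assumed usable fraction after framing/protocol overhead

        usable_mb_per_s = LINK_GBPS * 1000 / 8 * EFFICIENCY   # ~1000 MB/s usable
        per_drive = usable_mb_per_s / SPINDLES                 # ~6.7 MB/s per drive

        print(f"usable link bandwidth: ~{usable_mb_per_s:.0f} MB/s")
        print(f"needed per drive: ~{per_drive:.1f} MB/s")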
          • I didn't think anyone intended this for the main pipes of a SAN. I thought it was for bringing those individual spindles back to the FC switch, then using fibre for the big links. Though to be honest my understanding of FC SANs is poor; I am honestly posting more to get someone to explain it to me because I said something wrong than to try to enlighten others.
          • by afidel ( 530433 )
            Nah, I see this as a lower-cost way to distribute from the FC switch back to the storage users, e.g. the servers. Most storage is also presented by some kind of storage array; very, very little is JBOD presented directly by a switch. This is mostly due to the lack of management of JBOD as well as the fact that the performance improvement of placing a bunch of intelligent cache in front of the storage pool is huge.
          • by Intron ( 870560 )
            No big company wants to maintain two separate optical fiber networks, so you either get ethernet or fibre channel for your long runs. Since its inception, you have been able to run ethernet over fibre channel, but almost nobody uses expensive (FC) technology when they can use cheap technology (ethernet). Alternatively, you can run iSCSI, which is SCSI over TCP/IP over ethernet. FCoE lets you bridge two remote fibre channel SANs or connect to remote fibre channel storage using ethernet without having to conver
      • Re: (Score:2, Interesting)

        by shaggy43 ( 21472 )
        I have an account that I support that has completely saturated 4 4G ISLs in-between 2 Brocade 48k's, and had to re-balance their fibre. Granted, an individual HBA doesn't hit a sustained 2Gb/sec, but 16Gb/sec saturated to a pair of HDS Thunders is impressive.

        Mostly, I think this technology will compete against iSCSI, not dedicated fibre, with all the drawbacks -- plus an added drawback of currently being single-platform.
        • Single-platform? OpenFCoE may be a Linux software initiative, but I think the T11 FCoE standard [wikipedia.org] on which it is based is being developed for all modern major server platforms.
    • Re:Speed? (Score:5, Informative)

      by afidel ( 530433 ) on Tuesday December 18, 2007 @08:35AM (#21737780)
      According to this netapp paper [netapp.com] even NFS over 10GbE is lower latency than 4Gb FC. I imagine if the processing overhead isn't too high or offload cards become available then this would be significantly faster than 4Gb FC. As far as bandwidth goes, 10 > 4 even with the overhead of ethernet framing, especially if you can stand the latency of packing two or more FC frames into an ethernet jumbo frame.
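
      As a rough sizing sketch of that packing idea (Python; sizes are approximate, and note the actual FC-BB-5 encapsulation carries one FC frame per Ethernet frame, so packing several is the hypothetical being discussed):

      JUMBO_PAYLOAD = 9000      # typical Ethernet jumbo-frame MTU
      FC_FRAME_MAX = 2148       # ~max FC frame: 24-byte header + 2112-byte payload + CRC/delimiters
      PACKING_OVERHEAD = 8      # assumed per-frame encapsulation bytes, purely illustrative

      frames_per_jumbo = JUMBO_PAYLOAD // (FC_FRAME_MAX + PACKING_OVERHEAD)
      print(f"~{frames_per_jumbo} full-size FC frames per jumbo frame")   # prints 4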
      • Re:Speed? (Score:4, Informative)

        by Intron ( 870560 ) on Tuesday December 18, 2007 @11:04AM (#21739406)
        Umm. The paper says that the test load was sequential reads entirely cached in memory, so not exactly an unbiased test.
        • Why, of course: if you wanted to test the speed of the fabric rather than the speed of the RAID array, wouldn't you prefer to read from cache rather than disk?
      • Just thought I'd point out that NFS (NAS) is NetApp's bread and butter. They've been saying NFS is as good as block storage over Fibre Channel forever, and not everyone agrees. Their claim may or may not be true, but this coming from NetApp should be scrutinized in the same way as a statement from Microsoft saying how much lower their TCO is compared to Linux. Storage vendors are well skilled at spin.
        • 10 years ago, NetApp put out white papers saying that they could make Oracle run over NFS. Could you? Sure. Would you, if you wanted to keep your job? No.
      • 8Gb FC will be out long before 10Gb ethernet becomes reasonably priced.
        • by Znork ( 31774 )
          "8Gb FC will be out long before 10Gb ethernet becomes reasonably priced."

          You mean, 8Gb FC will be out long before 100 Gb ethernet becomes reasonably priced.

          10 Gb ethernet is already reasonably priced (compared to FC).
          • 10 GbE 10 Gb FCoE.

            FCoE is about making Ethernet more like Fibre Channel.
            • That should read:

              "10 GbE does not equal 10 Gb FCoE."
              • "10 GbE does not equal 10 Gb FCoE."

                It does when you are doing FCoE in software, which is what this thread is about. Sure, the vendors would like to sell you specialized FCoE cards which will end up costing the same as an Ethernet NIC and an FC HBA put together, but you don't have to buy them.
        • by afidel ( 530433 )
          10GbE is reasonable today. Quadrics has a 96 port switch for only $300 per port and adapters are only $1K (e.g. the NC510C from HP or the PXLA8591CX4 from Intel). Sure, you can get 2Gb FC for around this same price point, but 4Gb is significantly more. Brocade wants $500 per port with no SFPs and only 32 of the 64 ports enabled for the SilkWorm 4900; fully configured you're at greater than $1,500 per port. Qlogic is similar for the SANbox 9000.
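
          Roughly, per attached server, using the list prices quoted above (a Python sketch; the 4Gb HBA price is an assumption, not from the post):

          TEN_GBE_SWITCH_PORT = 300   # Quadrics 96-port switch, per the figures above
          TEN_GBE_ADAPTER = 1000      # NC510C / PXLA8591CX4 class NIC
          FC_SWITCH_PORT = 1500       # "greater than $1,500 per port" fully configured
          FC_HBA = 1000               # assumed typical 4Gb HBA price (illustrative only)

          print(f"10GbE:  ~${TEN_GBE_SWITCH_PORT + TEN_GBE_ADAPTER} per attached server")
          print(f"4Gb FC: ~${FC_SWITCH_PORT + FC_HBA} per attached server")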
      • What do you mean "10GbE is lower latency than 4Gb FC"? That's a bit apples-to-oranges, isn't it?
        You do realize that 10Gb FC is also available, and netapp has a conflict of interest? FCoE isn't going to do jack for netapp's NAS equipment.

        I imagine if the processing overhead isn't too high or offload cards become available then this would be significantly faster than 4Gb FC

        It won't have FC's other performance characteristics, and that's a lot of expensive ifs before even getting close.

        if you can stand the latency of packing two or more FC frames into an ethernet jumbo frame.

        If you could stand the latency, then why on Earth would you be using FC to begin with?

        FCoE isn't going to replace FC where FC is needed. It will only make c

    • Why not? Switched fabric topology has no inherent latency benefit over star topology, and the majority of servers in a data center aren't doing anything that need any more sophisticated throughput aggregation than 802.3ad (LACP) bonding will give you. As long as you have pause frame support (a prerequisite for this FCoE implementation) you can create a lossless ethernet network, which eliminates the need for much of the protocol overhead of something like iSCSI, as long as you're staying on the LAN.

      FCoE s
    • As far as I can see this is a way of bridging fibre channels over Ethernet. This does not necessarily mean that you will get fibre-like speed (throughput or latency). I am sure that this will have some use, but it does not mean that high performance data-centres will just be able to use Ethernet instead of fibre.

      To me, fibre channel SAN solutions are oversold. They raise the cost per GB/TB much higher than just putting all the drives a system needs directly into it. Direct attached storage (no swi

      • Re: (Score:3, Interesting)

        by jsailor ( 255868 )
        For that type of project, look to the hedge fund community. I know of 2 hedge funds that have built their own storage systems that way - Ethernet, Linux, direct attached disk, and a lot of custom code. My world doesn't allow me to get into the details, so I can't elaborate. My only point is that there are folks doing this and it tends to be the guys with large storage needs, moderate budgets, and a great deal of freedom from corporate standards and vendor influence.
        • Re: (Score:3, Interesting)

          by canuck57 ( 662392 )

          My only point is that there are folks doing this and it tends to be the guys with large storage needs, moderate budgets, and a great deal of freedom from corporate standards and vendor influence.

          Stay with them; these are good environments. BTW, I am not anti-standards, but at the end of the day standards need to make sense. That is, not a standard for pure political posturing.

      • by dave562 ( 969951 )
        I might have missed it in the logic you presented, but where do you account for hardware failure? If your box with the direct attached storage goes down then all access to that data ceases. If you're running a SAN with the app servers clustered, when you lose one of the boxes the cluster fails over and your users still have access to the data.

        Easier and less expensive to manage and less to go wrong.

        When "less" becomes a single point of failure you have problems. In this day and age you have to assume t

    • Whatever happened to ATAoE? Wasn't that supposed to be the cheap equivalent to iSCSI / Fibre Channel?

      More to the point, how difficult and expensive would it be to build a chip to interface between FCoE and a SATA drive?

      I'm still hoping for a cheap consumer solution for attaching drives directly to the network.

      • by jabuzz ( 182671 )
        Want to represent your tape drive using AoE? Sorry, you are out of luck. FCoE offers all the benefits of AoE (i.e. using cheap Ethernet, with no TCP/IP overhead) but adds the flexibility to do stuff other than SATA drives.
      • by jcnnghm ( 538570 )
        We have a SR1521 [coraid.com], and it seems to do its job pretty well. It provides lots (over 7TB) of cheap storage space to the network. It probably isn't as fast as some other solutions, but our application doesn't need it to be.
      • The problem with those protocols (ATAoE, etc.) is that plain ethernet is just a broadcast domain - no segmentation or isolation (i.e. any host could connect to any storage). The FC Protocol [wikipedia.org] is similar to TCP/IP but much more efficient and better suited to storage.
  • by afidel ( 530433 ) on Tuesday December 18, 2007 @08:20AM (#21737690)
    As long as a server is within the distance limit of copper, 10GE is about 3-4x cheaper than even 2Gb FC. We've also had a heck of a lot more stability out of our 6500 series switches than we have out of our 9140's, and the 9500's are extremely expensive if you have a need for under 3 cards' worth of ports.
    • We've also had a heck of a lot more stability out of our 6500 series switches than we have out of our 9140's, and the 9500's are extremely expensive if you have a need for under 3 cards' worth of ports.

      1) Why don't you just direct connect since you only have 3 HBAs?

      2) At least compare it to a 9120 or 9124 (which has 8-port licenses). Anyone can see that a 9140 (40 ports) and a 9506 (a director with 4 FC card slots) are way overkill for what you describe.

      I'd say that at the very least, you're misinformed as to wh
      • by afidel ( 530433 )
        I was talking about 3 line cards for the 9500 series, not 3 HBA's! By the time you buy the chassis, PSU's and sup(s) you have a LOT of sunk cost if you are going with less than 96 ports.
        • My bad. It makes much more sense now. My train of thought lumped you in with those who equate card with HBA, but in hindsight I see what you meant.

          A couple of things I've learned:

          If your company is buying a director and the main goal doesn't coincide with either uptime/high availability or port density, then all your company is doing is making the switch vendor's stock price go up. And if you don't need the lowest latency, don't buy fibre channel. Always buy what fits. It does everyone a favor in the long
  • by BrianHursey ( 738430 ) on Tuesday December 18, 2007 @08:31AM (#21737744) Homepage Journal
    As we have seen with iSCSI, the bandwidth capability over Ethernet just is not there. Working with EMC gear, I expect this will be great for the low-end company that needs a mid-tier or low-tier environment. However, large corporations with large databases and high numbers of systems still need to stay with fibre fabrics. This will probably show up only on the mid-tier platforms like the CLARiiON.
    • by totally bogus dude ( 1040246 ) on Tuesday December 18, 2007 @08:53AM (#21737904)

      I expect you're right, but it's interesting to note they're referring to this as Fibre Channel over Ethernet, and not over IP. The reduction in overhead there (not just packet size, but avoiding the whole IP stack) might be enough to really help; and if you're running separate 10 Gigabit Ethernet for the storage subsystem (i.e. not piggy backing on an existing IP network) it might be really nice. Or at least, comparable in performance and a heck of a lot cheaper.

      On the other hand, really decent switches that can cope with heavy usage of 10-GigE without delaying packets at all aren't going to be massively cheap, and you'd need very high quality NICs in all the servers as well. Even then, fibre's still probably going to be faster than copper... but that's just something I made up. Maybe someone who knows more about the intricacies of transmitting data over each can enlighten us all?

      There was recently an article about "storing" data within fibre as sound rather than converting it for storage in electrical components, since the latter is kind of slow; how does this compare to transmission via current over copper?

      • by afidel ( 530433 ) on Tuesday December 18, 2007 @09:08AM (#21738046)
        Latency and bandwidth are comparable for copper and fiber ethernet solutions today; the drawback to copper is you need to be within 15m of the switch. This isn't so bad in a small datacenter, but in a larger facility you would either need switches all over the place (preferably in twos for redundant paths) or you'd need to go fiber, which eliminates a good percentage of the cost savings. Fibre Channel used to have copper as a low-cost option, but it's not there in the 4Gb world, and even in the 2Gb space it was so exotic that there was almost no cost savings due to lack of economies of scale.
        • The 15m limit would be a drawback for most data centers and DR sites that use mid- and long-distance solutions for DR and business continuity configurations, like SRDF and SAN Copy. Again, this would be great for mid- and low-tier configurations, creating the capability of implementing low-cost SAN configurations. I know SAN capabilities with fibre; my only experience with Ethernet configurations is low-end iSCSI configurations, and most of the current iSCSI configurations do not use fibre drives but they
        • by guruevi ( 827432 )
          Fibre Channel has a lot of copper in a lot of installations; all you need to do is get an SFP module that terminates copper instead of fiber optics. This is especially common for direct connects between servers and storage (Apple XRAID and Dell solutions, for example) or direct connects between switches and storage in the same rack. The interconnects for large SANs (between switches and backbones) are usually fiber, though. Fiber is very expensive, and the SFPs themselves are not cheap; neither are the switches any c
        • Fibre Channel is a protocol, not a cable (that's why it's not spelled Fiber). In fact, high end systems DO have copper based FC connections. They are great for shorter runs - the EMC Clariion uses copper cables between its disk shelves to interconnect them. The CX3 series runs 4Gb end-to-end with no issues on the copper interconnects.
        • by eth1 ( 94901 )
          My company has several large data centers. While the network portion is generally separated from the server portion, so that two servers next to each other in a rack might talk to each other via a switch 25m away, the SAN racks and the servers that use them are usually fairly close to each other. There's no reason why an off-net storage switch couldn't be located in the SAN rack and connected directly for most installations.

          Granted, you do lose some placement flexibility, which might be a deal-breaker in so
          • I just talked with some colleagues; I was incorrect. The media is not the limit; the actual protocol is. So using copper vs. fiber makes no difference, and in some cases copper can be faster. However, long-distance solutions are another story.
    • (4) bonded NICs/server
      (1) Procurve gigE switch w/Jumbo frames turned on
      (many) SAS drives

      and we can, in production, have 4Gb throughput on our iSCSI SAN.

      Tell me again where this "throughput" is hiding?

      Regards,
    • by Chris Snook ( 872473 ) on Tuesday December 18, 2007 @09:26AM (#21738222)
      Bullshit.

      The bandwidth is there. I can get 960 Mb/s sustained application-layer throughput out of a gigabit ethernet connection. When you have pause frame support and managed layer 3 switches, you can strip away the protocol overhead of iSCSI, and keep the reliability and flexibility in a typical data center.

      The goal of this project is not to replace fibre channel fabrics, but rather to extend them. For every large database server at your High End customer, there are dozens of smaller boxes that would greatly benefit from centralized disk storage, but for which the cost of conventional FC would negate the benefit. As you've noted, iSCSI isn't always a suitable option.

      You're probably right that people won't use this a whole lot to connect to super-high-end disk arrays, but once you hook up an FCoE bridge to your network, you have the flexibility to do whatever you want with it. In some cases, the cost benefit of 10Gb ethernet vs. 2x 4Gb FC alone will be enough motivation to use it even for very high-end work.
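
      For what it's worth, that 960 Mb/s figure is consistent with simple per-frame overhead arithmetic on a standard 1500-byte MTU (a Python sketch assuming plain IPv4/UDP framing with no options):

      MTU = 1500
      ETH_OVERHEAD = 14 + 4 + 8 + 12      # header + FCS + preamble + inter-frame gap
      IP_UDP_HEADERS = 20 + 8             # IPv4 + UDP, no options

      wire_bytes = MTU + ETH_OVERHEAD     # 1538 bytes on the wire per full frame
      payload = MTU - IP_UDP_HEADERS      # 1472 bytes of application data per frame

      efficiency = payload / wire_bytes
      print(f"~{efficiency:.1%} efficient, ~{efficiency * 1000:.0f} Mb/s of payload on gigabit")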
      • by Huh? ( 105485 )
        I've never heard of anyone getting 960Mb/s with iSCSI out of a single gig ethernet link.
        • Oops, I was vague. My results were with UDP NFS, which is much simpler to tune. As you noted in your reply, it's possible to tune iSCSI to similar performance levels, but doing so without sacrificing latency is rather difficult. My point was that simpler protocols (like FCoE) make it much easier to get the most out of the hardware.

          For what it's worth, the NFS server in my testing was using Fibre Channel storage.
        • by Znork ( 31774 )
          I max out at 960Mb/s with iSCSI over gigabit $15 realtek cards with a $150 dlink switch. With out of the box iSCSI enterprise target software on Linux, to a client running OpeniSCSI (eh, or whatever it is that's shipped in RedHat by default). Over substandard cabling, on top of that. (Fer sure, by then the iSCSI server has cached the data in-mem, but anyway.)

          So I'd really have to wonder what anyone failing to get that is running. I hope they're not paying for it.

          Sure, non-cached performance against the IDE
          • Re: (Score:2, Interesting)

            by myz24 ( 256948 )
            I think you could follow up with some info about your setup. I mean, there is no way you're getting those speeds without tuning some network parameters or without some serious CPU and RAID setup. It's not that I don't believe you; I have a buddy that has done the same, but with NFS, and he's using an OpenSolaris system with TCP offloading cards and a heck of a RAID array.
            • Re: (Score:3, Interesting)

              by Znork ( 31774 )
              "those speeds without tuning some network parameters or with some serious CPU and RAID setup."

              Basic setup is approximately this: CPUs for both servers and clients range from AMD XP 3500+ to AMD X2 4800+. Motherboards are Asus (Nvidia 550 and AMD690) boards, with 2-4GB memory plus an extra SATA card on the iSCSI servers, and extra rtl8168/9 gigabit cards (the forcedeth driver has some issues). Disks on the iSCSI servers are striped with LVM, but not to more than 3 spindles (I don't care that much about max
    • The vast majority of SANs are not bandwidth-bound; they are bound by the aggregate random IO throughput of all the spindles. We have a SAN in which each module added to the SAN adds 2 GBps of bandwidth. Each module has 4-12 spindles. With 6 modules, the SAN has 12 GBps of bandwidth available to servers, all clustered and load-balanced. With 40 modules, that's 80 GBps. I don't think even the highest end Fibre Channel SANs can compete with that from a bandwidth perspective.

      Fiber Channel will be dead in less t
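
      The scaling argument above is just linear aggregation; a trivial sketch with the quoted 2 GB/s per module:

      GB_PER_MODULE = 2                   # bandwidth each added module contributes, per the post

      for modules in (6, 40):
          print(f"{modules} modules -> {modules * GB_PER_MODULE} GB/s aggregate")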
  • I'm not a datacenter kind of guy, so help me out. If you've got 10 G Ethernet, then why would you want to run FC rather than iSCSI?

    Can someone elaborate?

    • by cerelib ( 903469 )
      Here is the simple version.

      iSCSI is for implementing a "direct attached storage device" using an IP network (Internet/internet/intranet) as the backbone.

      FCoE does not involve IP and is simply a lower cost, possibly better (time will tell), way of replacing optical fabric in data centers.
    • iSCSI adds a lot of protocol overhead, and tuning it to work well with a given application and network load becomes quite difficult above gigabit speed. When you're using a fairly reliable transport, such as FC or Ethernet with pause frames, you can dispense with that, and get near-optimal throughput and latency with very little tuning.

    • I'm not a datacenter kind of guy, so help me out. If you've got 10 G Ethernet, then why would you want to run FC rather than iSCSI?

      I'm not a datacenter guy either, but I am a programmer.

      My guess is simply just avoiding the IP stack. I'd guess an IP stack would add some latency, definitely adds some overhead, and most implementations are unlikely to be well optimized for extremely high bandwidth links (10 Gbit/sec).

      FCoE avoids the IP stack entirely. If done properly, it can avoid all of the above problems.
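
      As a very simplified illustration of that layering difference (this is not the real FC-BB-5 encapsulation, which has version/reserved fields, specific SOF/EOF codes and padding rules; the only points here are the dedicated EtherType 0x8906 and the absence of IP/TCP headers):

      import struct

      FCOE_ETHERTYPE = 0x8906   # EtherType assigned to FCoE

      def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
          """Wrap a raw FC frame directly in an Ethernet frame (illustrative sketch only)."""
          eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
          sof = b"\x2e"          # example start-of-frame delimiter byte
          eof = b"\x41"          # example end-of-frame delimiter byte
          return eth_header + sof + fc_frame + eof

      # The same I/O over iSCSI would instead go SCSI -> iSCSI PDU -> TCP -> IP -> Ethernet,
      # with per-connection TCP state, acknowledgements and congestion control handled in software.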
      • Okay now I'm confused. If you're avoiding the IP stack entirely, where does crossing subnets come into play?
        • Okay now I'm confused. If you're avoiding the IP stack entirely, where does crossing subnets come into play?
          I guess they'll just have to cross that bridge when they come to it.
          • Huh? Last time I checked, a bridge is not for crossing subnets. In fact, a bridge doesn't even operate at layer 3 at all. Or do you mean some other type of bridge?
    • Re: (Score:2, Insightful)

      by Joe_NoOne ( 48818 )
      Some important limitations of iSCSI :

      1) IP doesn't guarantee in-order delivery of packets, so TCP has to buffer and reorder them (think of stuttering with streaming media, etc...)

      2) Frame sizes are smaller and have more overhead than Fibre Channel packets.

      3) Most NICs rely on the system to encapsulate & process packets - a smart NIC [TCP Offload Engine card] costs almost as much as a Fibre Channel card.
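
      To put rough numbers on the overhead in point 2 above (a Python sketch; header sizes are the standard ones, and the FCoE figures are approximate encapsulation and FC frame header sizes):

      ETH = 14 + 4                  # Ethernet header + FCS
      ISCSI = ETH + 20 + 20 + 48    # + IPv4 + TCP + 48-byte iSCSI basic header segment
      FCOE = ETH + 14 + 4 + 24      # + FCoE encapsulation header/trailer + FC frame header

      print(f"iSCSI per-frame overhead: ~{ISCSI} bytes")   # ~106 bytes
      print(f"FCoE per-frame overhead:  ~{FCOE} bytes")    # ~60 bytes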
    • by Znork ( 31774 )
      Well, basically, this is how it works:

      Yer olde FC product salesman has a much better commission, as FC products have far, far higher margins than ethernet products. Therefore the FC salesman buys you better lunches and invites you to seminars with more free booze, while displaying his company-produced graphs of how cutting-edge lab FC hardware vastly outperforms iSCSI served by a PC from last century.

      In your booze addled state you find this reasonable, and refrain from using google or performing actual tes
  • AoE is awesome: it is cheap, it is simple, and the spec is only 8 pages. It is the only SAN protocol you can really understand completely in one sitting.

    http://en.wikipedia.org/wiki/ATA_over_Ethernet [wikipedia.org]

    And combine it with Xen or other virtualization technology and you have a really slick setup:

    http://xenaoe.org/ [xenaoe.org]
  • It was announced almost 20 days back on lkml, and the summary is incorrect in saying Intel has just announced it.

    Looks like either the /. editors are lousy buffoons who do not care to click on the links to check the article summary, or someone from Intel is trying to make sure that OpenFCoE gets some press.

    Doh... bad, very bad journalism on the part of Slashdot. Please do not become OSNews; at least check your articles, for Christ's sake.
