InfiniBand SAN

Sharky Forums



  1. #1
    Great White Shark
    Join Date
    Nov 2000
    Location
    Alpharetta, Denial, Only certain songs.
    Posts
    9,925

    InfiniBand SAN

    I've got a dream, a dream started by browsing Supermicro's website.

    Namely, I'm interested in creating a "blade storage cluster": one 7U blade enclosure with the server nodes in it, which has an integrated 4x DDR InfiniBand switch (20 Gb/s per port). I want to use Supermicro CSE-846E1 chassis to create storage nodes, 24 drives in 4U each. From what I understand, each storage node is basically a server with an InfiniBand HCA (or TCA) in it, connected to the InfiniBand switch.
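
    A quick back-of-the-envelope sketch of what one of those DDR links buys a storage node (the 8b/10b encoding overhead is real; the ~100 MB/s per SAS spindle figure is just my assumption):

        # Rough numbers only -- the per-drive throughput is a guess, not a measurement.
        signal_rate_gbps = 20.0                        # 4x DDR InfiniBand, per port
        data_rate_gbps = signal_rate_gbps * 8 / 10     # 8b/10b encoding overhead
        data_rate_MBps = data_rate_gbps * 1000 / 8     # ~2000 MB/s usable per link

        drives_per_node = 24
        per_drive_MBps = 100                           # assumed streaming rate per SAS/SATA spindle

        node_streaming_MBps = drives_per_node * per_drive_MBps
        print(f"usable link bandwidth:   {data_rate_MBps:.0f} MB/s")
        print(f"24-drive node streaming: {node_streaming_MBps} MB/s")
        # -> a full 24-drive node can just about fill one DDR link when streaming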

    Do I need a head end, or only if I'm planning on sharing storage between server nodes (blades)? Does anyone actually make a TCA? That is, does someone make a target adapter that takes 4x DDR InfiniBand on one side and has a SAS RAID controller on the other?

    It seems like a lot of wasted power (and money) to build up a whole system whose sole purpose is to pass traffic from the InfiniBand adapter to the RAID controller.

    Any thoughts or suggestions? Has anyone had any experience with InfiniBand networks, SANs, etc.? I'd love to hear some first-hand experience.

    Crusader for the 64-bit Era.
    New Rule: 2GB per core, minimum.

    Intel i7-9700K | Asrock Z390 Phantom Gaming ITX | Samsung 970 Evo 2TB SSD
    64GB DDR4-2666 Samsung | EVGA RTX 2070 Black edition
    Fractal Arc Midi |Seasonic X650 PSU | Klipsch ProMedia 5.1 Ultra | Windows 10 Pro x64

  2. #2
    Mako Shark Nater
    Join Date
    Dec 2005
    Location
    Crawfordsville Indiana
    Posts
    3,206
    I bought a big book on SANs last week, and I'm probably going to buy another next paycheck. Honestly, I think using InfiniBand for storage networking is fairly new; the book I bought didn't really cover it and only mentioned it in passing (the usual 'it's used in HPC...'). Just searching for 'InfiniBand' on Amazon yields next to nothing.

    Are you doing this relatively soon, or is it just an idea? I ask because CNAs (converged network adapters) are starting to creep onto the market, though I don't know whether FCoE requires a special switch compared to plain 10GbE. Obviously it's not as fast, but it's going to be hard to saturate 20 Gb/s with just storage anyway.
    Q6600 @ 3.6GHz (Tuniq Tower 120) - DFI Lanparty LT P35-T2R - 8GB Corsair DDR2-800 - eVGA GTX 275 SC - SoundBlaster X-Fi - Western Digital VelociRaptor 300GB - Seagate 7200.10 750GB (2) - Western Digital 1.5TB Green (2) - Western Digital 2TB Green - WINDy-Soldam MT-Pro 1700 - Antec Signature 850W- HP LP2475W (H-IPS) - Samsung 204B (TN) - Alienware Ozma 7 Headphones - Windows 7 Ultimate

  3. #3
    Great White Shark
    Join Date
    Nov 2000
    Location
    Alpharetta, Denial, Only certain songs.
    Posts
    9,925
    Quote Originally Posted by Nater
    I bought a big book on SANs last week, and I'm probably going to buy another next paycheck. Honestly, I think using InfiniBand for storage networking is fairly new; the book I bought didn't really cover it and only mentioned it in passing (the usual 'it's used in HPC...'). Just searching for 'InfiniBand' on Amazon yields next to nothing.

    Are you doing this relatively soon, or is it just an idea? I ask because CNAs (converged network adapters) are starting to creep onto the market, though I don't know whether FCoE requires a special switch compared to plain 10GbE. Obviously it's not as fast, but it's going to be hard to saturate 20 Gb/s with just storage anyway.
    Agreed. It is more of a "headroom" thing.

    And no, this isn't near-future. I've just been researching SAN technology. I've become disillusioned with both iSCSI and FC as of late, because neither offers the scalability that I'm looking for. As for 10GbE, the adapters and switches for it are actually more expensive than the InfiniBand gear, so yeah, I'd go InfiniBand before 10GbE (though that would be sweet on the networking side). I just find it amusing that a lot of the sites I look at for InfiniBand talk about all of these awesome adapters that give you high-speed connections to your storage unit, but none of them actually offer or describe the storage units.
    Last edited by James; 07-27-2009 at 07:15 AM.


  4. #4
    Great White Shark proxops-pete
    Join Date
    Feb 2003
    Location
    Houston, we have lift off!
    Posts
    10,316
    A high school friend of mine is CTO at a company working on FCoE.
    I will ask him to drop in here and comment on your stuff ...

  5. #5
    Sushi
    Join Date
    Sep 2009
    Posts
    1

    FCoE

    Hi,
    You do need a special switch for FCoE. Today an FCoE switch must include what is called an FCF (Fibre Channel Forwarder), which provides the encapsulation/decapsulation of FCoE frames and the FC name-server functionality.
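
    Conceptually, the encapsulation is just the native FC frame wrapped inside an Ethernet frame with its own EtherType. A rough sketch in Python (simplified, not the exact FC-BB-5 field layout; the SOF/EOF code values here are illustrative):

        import struct

        FCOE_ETHERTYPE = 0x8906   # EtherType assigned to FCoE

        def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes,
                             sof: int = 0x2E, eof: int = 0x41) -> bytes:
            """Wrap a complete FC frame (header + payload + CRC) for transport on Ethernet."""
            eth_hdr = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
            fcoe_hdr = bytes(13) + bytes([sof])   # version/reserved bytes, then start-of-frame code
            fcoe_trl = bytes([eof]) + bytes(3)    # end-of-frame code plus reserved padding
            return eth_hdr + fcoe_hdr + fc_frame + fcoe_trl

        # The FCF in the switch does this (and the reverse) in hardware, and also provides
        # the fabric services (name server, FLOGI handling) that a native FC switch would.
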
    For more information on FCoE, see my recent blog post http://blogstu.wordpress.com/2009/09...nt-state-fcoe/
    Let me know if I can be of any further help.
    Note that while InfiniBand is a nice solution for low-cost, low-latency cluster environments (popular in supercomputers), there are few storage solutions built on it, and the market has much more momentum behind moving to Ethernet-based options (including FCoE, which will help existing FC customers move to Ethernet).

  6. #6
    Great White Shark
    Join Date
    Nov 2000
    Posts
    21,595
    fcoe, thanks for the post and link to your blog.

    We need to resuscitate this forum with a few emerging technology topics.

    James, is this research for your lab or for a client? It sounds expensive for a lab.
    Are you thinking of a SuperBlade® server as well as the storage chassis?

  7. #7
    Mako Shark Nater
    Join Date
    Dec 2005
    Location
    Crawfordsville Indiana
    Posts
    3,206
    I heard Fibre Channel over InfiniBand mentioned over at The Reg, but I've never been able to find any information on it.

    How exactly do they make FC work over Ethernet anyway? I've always understood Ethernet to be lossy, which would be a big problem for a storage network. What are they doing to the Ethernet frames to deal with this?

  8. #8
    Great White Shark proxops-pete
    Join Date
    Feb 2003
    Location
    Houston, we have lift off!
    Posts
    10,316
    As fcoe (a.k.a. Stu) pointed out, here's a YouTube video with the explanation:

    EDIT: And that's fcoe, a.k.a. Stu, my high school friend, now CFO of EMC. My only claim to even the slightest fame! LOL

    http://www.youtube.com/watch?v=EZWaOda8mVY
    Last edited by proxops-pete; 09-29-2009 at 01:04 PM.

  9. #9
    Great White Shark
    Join Date
    Nov 2000
    Location
    Alpharetta, Denial, Only certain songs.
    Posts
    9,925
    Quote Originally Posted by Nater
    I heard Fibre Channel over InfiniBand mentioned over at The Reg, but I've never been able to find any information on it.

    How exactly do they make FC work over Ethernet anyway? I've always understood Ethernet to be lossy, which would be a big problem for a storage network. What are they doing to the Ethernet frames to deal with this?
    From what I understand, they use Ethernet only as the physical/link layer, and it has to be "lossless" Ethernet: the switches and adapters support priority-based flow control (the Data Center Bridging extensions), so frames carrying FC traffic get paused at the source instead of being dropped the way ordinary Ethernet traffic can be. That's why, as fcoe mentioned, it requires special switches rather than your standard 10/100/1000 gear.
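
    A toy way to picture that pause mechanism (nothing like the real 802.1Qbb state machine, and the buffer sizes are made up): the receiver stops the sender before its buffer overflows, so frames queue instead of dropping.

        # Toy model: a sender bursting frames at a receiver that drains more slowly.
        BUFFER_SLOTS = 8        # receive-buffer capacity in frames (made-up number)
        PAUSE_THRESHOLD = 6     # send PAUSE when the buffer gets this full (made-up)

        def run_link(frames_to_send, send_per_tick, drain_per_tick, lossless):
            """Return how many frames get dropped while delivering the burst."""
            rx_buffer, paused, dropped, sent = 0, False, 0, 0
            while sent < frames_to_send:
                if not (lossless and paused):             # a paused sender holds its frames
                    for _ in range(send_per_tick):
                        if sent >= frames_to_send:
                            break
                        if rx_buffer < BUFFER_SLOTS:
                            rx_buffer += 1
                        else:
                            dropped += 1                  # plain Ethernet: overflow, silent drop
                        sent += 1
                rx_buffer -= min(drain_per_tick, rx_buffer)
                paused = rx_buffer >= PAUSE_THRESHOLD     # receiver signals PAUSE before it is full
            return dropped

        print("plain Ethernet drops:   ", run_link(100, 2, 1, lossless=False))
        print("lossless Ethernet drops:", run_link(100, 2, 1, lossless=True))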


    ua549, yes, the blade enclosure was the origin of this little thought/planning experiment. Because it has an integrated multi-port InfiniBand switch, I wanted to see what it would take to get a full-blown blade enclosure plus an InfiniBand-based storage network connected and working. It's not for anything my company will actually do at the moment; it's for the future, when it becomes a more viable option.


    The downside I'm finding lately is that there doesn't seem to be a company that offers a single-board solution for an InfiniBand connection to a SAS backplane/enclosure. It seems most of the solutions are quite literally a full-blown system with a nice storage backplane (like the Supermicro 846 series), a RAID card, and an InfiniBand adapter. That means it's running an OS and has all of the normal issues that a regular system does. What I really wanted to find was a board with the InfiniBand adapter built in, a SAS RAID controller built in, and bam, that would be it.
    Last edited by James; 10-02-2009 at 07:22 AM.


  10. #10
    Sushi
    Join Date
    Oct 2009
    Posts
    1

    InfiniBand storage

    The closest I could find was the Mellanox MTD2000, but that's really for test/development, not a production model.

    I guess if you are technical enough you can build your own; Mellanox has the drivers and the IB stack, which I think are open source.

    Our task is very similar to what you're describing: a telco site with a cluster of 10 modern virtualised (VMware ESX 4) servers needs an extremely fast SAN for shared storage.

    Putting QDR IB cards in the servers gives an aggregate of 400 Gb/s of throughput, most of which can be dedicated to storage. If you think about it, you only need about 150 Intel X25-E or similar SLC-based disks to keep up with that. That is just six 2U enclosures holding 25 disks each, such as the HP StorageWorks 70. It is very tempting, isn't it?
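
    Rough arithmetic behind those figures (the 75% storage share and the ~250 MB/s per X25-E class SSD are assumptions I'm plugging in just to show the scale):

        servers = 10
        qdr_gbps_per_server = 40        # 4x QDR signalling rate per server
        storage_share = 0.75            # assume roughly 3/4 of the fabric carries storage I/O
        ssd_MBps = 250                  # rough sequential rate for an X25-E class SLC SSD

        storage_gbps = servers * qdr_gbps_per_server * storage_share   # ~300 Gb/s
        storage_MBps = storage_gbps * 1000 / 8                         # ~37,500 MB/s
        ssds = storage_MBps / ssd_MBps                                  # ~150 drives
        enclosures = -(-ssds // 25)                                     # 25-bay 2U shelves, rounded up
        print(f"{ssds:.0f} SSDs across {enclosures:.0f} enclosures")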

    Please let me know if you get any closer. If you've tried the Mellanox unit, found it suitable and scalable enough, and decided to build a larger unit, we might be able to help with components.

  11. #11
    Mako Shark Nater
    Join Date
    Dec 2005
    Location
    Crawfordsville Indiana
    Posts
    3,206
    Last edited by Nater; 11-22-2009 at 03:16 PM.
