High capacity NAS - Hardware list - yay or nay?

Discussion in 'Hardware' started by Delphinus, Sep 15, 2011.

  1. Delphinus New Member

    Delphinus, Sep 15, 2011

    Greetings fellow NASers,

    I am looking to build a 30+ TB NAS from scratch and have put together the following hardware list. I would greatly appreciate any feedback, questions, comments, or concerns on this list. Warnings and "gotchas" are also welcome. I hate getting sucker punched.

    I realize the hardware is probably a bit overkill; please ignore that. What I'm looking for is a medium-performance NAS that is STABLE, i.e. I really, really don't want to lose data. I currently have a 12TB RAID setup in an Ubuntu HTPC environment and it's pretty much dead. I was able to save most of the data, but this is the second time it's crashed and I don't want to lose data again.

    Purpose of NAS: To store media files (TV, Movies (HD) and Music) to be played via either a Boxee or my current HTPC.

    The hardware:
    Disks: 12 x HITACHI Deskstar 5K3000 HDS5C3020ALA632 (0F12117) 2TB SATA 6.0Gb/s 3.5" Internal
    12 x WD 2TB Green Drives

    Motherboard: MSI 890FXA-GD70 AM3 AMD 890FX SATA 6Gb/s USB 3.0 ATX AMD Motherboard

    CPU: AMD Athlon II X2 250 Regor 3.0GHz Socket AM3 65W Dual-Core Desktop Processor

    RAM: G.SKILL Ripjaws Series 16GB (4 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800)

    PSU: KINGWIN Lazer Platinum Series LZP-550 550W ATX 12V v2.2 / EPS 12V v2.91 / SSI EPS 12V v2.92 SLI Ready CrossFire Ready 80 ...

    SAS Card x3: LSI MegaRAID Internal Low-Power SATA/SAS 9240-8i 6Gb/s PCI-Express 2.0 RAID Controller Card, Single

    Case: NORCO RPC-4224 4U Rackmount Server Case with 24 Hot-Swappable SATA/SAS Drive Bays

    *Questions I have: Will these LSI 9240 cards work with FreeNAS 8.x? Does anyone know if they will support 3TB drives?
    Are these decent enough SAS cards to handle 24 drives? Will the PCIe bus cause bottlenecking? Also, the RAM is Quad Channel but the motherboard only supports Dual Channel RAM; will the RAM just run at Dual Channel, or will it not be compatible at all?

    I plan to boot FreeNAS off of CF from the IDE slot so all 24 drives will be dedicated to the NAS.

    After doing some research and talking with friends, I am going to run 3 pools of 8 drives each to minimize rebuild times if there is a failure and to ensure that I can lose multiple disks without loss of data. The pools will be RAIDZ2 pools.
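    The layout described above can be sketched as a quick calculation. This is only an illustration: the pool name "tank", the da0..da23 device names, and the idea of building it at the command line (rather than through the FreeNAS GUI) are all assumptions.

```python
# Sketch of the planned layout: one pool of 3 RAIDZ2 vdevs, 8 disks each.
# Pool name "tank" and device names da0..da23 are assumptions; on FreeNAS
# the pool would normally be created through the GUI rather than zpool(8).
disks = [f"da{i}" for i in range(24)]
vdevs = [disks[i:i + 8] for i in range(0, 24, 8)]

cmd = "zpool create tank " + " ".join(
    "raidz2 " + " ".join(v) for v in vdevs
)
print(cmd)

# Usable space: each RAIDZ2 vdev gives up 2 of its 8 disks to parity.
usable_tb = len(vdevs) * (8 - 2) * 2  # 2TB drives
print(f"~{usable_tb} TB usable before ZFS overhead")
```

    With 2TB drives this works out to roughly 36TB usable, comfortably over the 30TB target, while any single vdev can survive two simultaneous disk failures.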

    I've done a lot of reading on these forums but haven't really found this specific hardware setup and would really like to get some opinions/expertise before I start shelling out the cash.

    Thanks in advance for everyone's help, it's greatly appreciated.

    -Del
  2. SnorreSelmer New Member

    SnorreSelmer, Sep 17, 2011

    I don't know whether the SAS cards will work, but the PCIe bus should NOT cause bottlenecks unless you run multiple Gbit NICs.

    I haven't heard of Quad Channel RAM before (then again, I don't follow the hardware scene closely anymore), but I know that Dual Channel and Triple Channel RAM meant the motherboard used RAM in pairs or threes (i.e. you had to have 2/4 sticks of RAM in a DC mobo and 3/6 sticks in a TC mobo; running a TC mobo with two sticks would fail).

    My FreeNAS server runs off a 4GB USB stick and it works great. With FreeNAS 8 they recommend you run it off a USB stick or flash card because FreeNAS 8 can't share HDD space between the OS and storage.

    As for the storage pools, splitting it like that is quite smart. If it's possible, striping the three arrays together would give you some serious speed, while the underlying raidz would keep each node in the stripe protected from failure.
    Then again, I have a straight-up raidz and you can see the performance in my signature. For home use I find it to be PLENTY adequate.

    Oh, I just remembered: there is a significant difference in how RAID and ZFS rebuild arrays. RAID rebuilds the entire disk surface, while ZFS resilvers only the allocated data. That means ZFS rebuilds faster because it doesn't waste time rebuilding empty space.
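    The difference can be put in rough numbers. The 100 MB/s sustained rebuild rate and the 40% pool occupancy below are assumptions for illustration only, not measured figures.

```python
# Rough rebuild-time comparison for one 2TB drive: a conventional RAID
# rebuild touches the whole surface, a ZFS resilver only allocated data.
# The 100 MB/s rate and 40% occupancy are illustrative assumptions.
REBUILD_MB_PER_S = 100
disk_mb = 2000 * 1000          # 2TB drive, in MB
used_mb = int(disk_mb * 0.40)  # pool 40% full

raid_hours = disk_mb / REBUILD_MB_PER_S / 3600
zfs_hours = used_mb / REBUILD_MB_PER_S / 3600
print(f"whole-surface rebuild ~{raid_hours:.1f} h, resilver ~{zfs_hours:.1f} h")
```

    The emptier the pool, the bigger the win; a nearly full pool resilvers in about the same time as a conventional rebuild.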
  3. Milhouse Super Moderator

    Milhouse, Sep 17, 2011

    The LSI 9211-8i works fine out of the box with 8.x. The only problem is no automatic disk spin-down, as the ATA idle spin-down commands are not supported by the mps driver, but I've got a daemon for that now (an early version of which is here; let me know if you're interested in the updated version).

    Your 9240 SAS cards are PCIe v2 cards using up to 8 lanes, and in a PCIe v2 motherboard each lane will provide 500 MBytes/sec (5 Gbits/sec) of bandwidth, so as long as you are dropping these cards into v2 x8 slots you're unlikely to run short of bus bandwidth with 8 drives attached. However, if you are using x4 or x1 PCIe slots, the cards will still work but the total bandwidth will be significantly reduced, and with a single lane you will certainly notice the lack of bandwidth.

    Also, if your motherboard supports only PCIe v1, you will have to halve the lane bandwidth figures, as PCIe v1 lanes max out at 250 MBytes/sec (2.5 Gbits/sec) per lane.
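    Those per-lane figures make the slot check easy to run through. The ~130 MB/s per-drive sequential figure below is an assumed ballpark for 2TB 5400rpm disks, not a spec.

```python
# Per-slot PCIe bandwidth vs. what 8 drives on one HBA can push.
# The ~130 MB/s per-drive sequential figure is an assumption for
# 2TB 5400rpm-class disks, used here only for illustration.
LANE_MB_S = {1: 250, 2: 500}  # PCIe v1 / v2, MB/s per lane

def slot_mb_s(pcie_version, lanes):
    return LANE_MB_S[pcie_version] * lanes

drives_mb_s = 8 * 130  # one 8-port card fully loaded
for version, lanes in [(2, 8), (2, 4), (2, 1), (1, 8)]:
    bw = slot_mb_s(version, lanes)
    verdict = "OK" if bw >= drives_mb_s else "bottleneck"
    print(f"PCIe v{version} x{lanes}: {bw} MB/s -> {verdict}")
```

    By this rough math only the x1 case falls short; even a v1 x8 slot has headroom over eight spinning disks.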

    I believe the LSI 6Gbit/sec cards all support 3TB+ drives (the older 3Gbit/sec cards do not/will not).

    With your planned setup, you will of course saturate a single GigE NIC two or three times over... but at least any resilvers (ZFS resyncs) will complete in as short a time as possible, reducing the time your vdevs spend degraded.

    Any particular reason for booting from CF? A 2GB USB memory stick is usually sufficient.

    I would be tempted to go for a slightly heftier PSU, about 700W, which may result in slightly better efficiency.

    I presume you mean a single zpool consisting of 3 RAIDZ2 vdevs each with 8 drives? If so, sounds very sensible! :)
  4. Delphinus New Member

    Delphinus, Sep 17, 2011

    SnorreSelmer, thanks for the reply. Glad to hear that the PCI-E bus won't be a bottleneck. I combed through the FreeNAS hardware compatibility list some more and found some posts where people used LSI 9211-8i cards and had very good success with them, so I'm going to go with those cards. I also decided to split the drive purchases across different manufacturers to limit the possibility of a mass crash if I got a bad batch. That's good info about RAID and ZFS as well. Thanks.

    Milhouse, thanks for your reply. You are correct in presuming that I meant a single zpool consisting of 3 vdevs! Glad you agree on the sensibility of it. It's nice to have a hardware config validated by those in the know!

    Thanks again for the replies!
  5. RaynMan New Member

    RaynMan, Sep 19, 2011

    I have the LSI 9260-8i on a Supermicro Xeon board and it works fine in v7.2. I still need to test with v8; I can try later and let you know. I don't see a problem with it working in v8.
  6. Delphinus New Member

    Delphinus, Sep 20, 2011

    Thanks RaynMan, I appreciate the info. Looking forward to getting my rig built and up and running.
