Howto: FreeNAS 8 on ESX5i with SAS2008 HBA, multiple CPUs and multiple NICs

Discussion in 'Installation' started by ozra, Feb 28, 2012.

  1. ozra, Feb 28, 2012

    My first post; constant lurker.......
    Hopefully this helps someone else. My reasons for wanting to do the above are explained below.

    I have had multiple problems with the SAS2008 HBA and FreeNAS 8.0.x (FreeBSD 8.2) running under ESX5i as a VM.
    Being a FreeBSD noob did not help!

    All credit for solving it to the Web and Google! EVENTUALLY!!!!


    The cards (LSI SAS2008: IBM M1015 / Dell H200 + SAS expander) work fine under FreeNAS 8 on a standalone PC.
    I could not get them to work under FreeNAS 8.0.x under ESX5 (passed through), ALTHOUGH ESX5i itself could work with the card without any problems whatsoever.
    I did not want to add a VMFS layer in between ESX5 and FreeNAS ZFS.
    OpenIndiana can also access the card fine under ESX5 (passed through), but I have already gone down the FreeNAS road with about 30TB of storage (Norco 24-drive chassis with a Chenbro CK23601 6Gb/s 36-port SAS expander).

    So I figured that it’s the FreeBSD SAS2008 driver.

    Please note that the Chenbro CK23601 played no role in the problems I experienced; they started before I even attached it to the SAS HBA.

    Running an AMD FX-8150 with 16GB of RAM and a 4-port Intel network card (lagg on a managed switch) just for FreeNAS seemed like a bit of a waste.

    My ESX server with all my other virtual machines was underutilized.
    As I was running ESX anyway for SABnzbd/CouchPotato/Sick Beard/a Windows domain/Asterisk/Exchange and various other SIP servers, with multiple-snapshot capability and configs, I wanted to utilize it better.



    Installing FreeNAS 8.0.x under ESX5i with the SAS2008 card passed through (VMDirectPath) produced the following errors during the FreeNAS boot:

    run_interrupt_driven_hooks: still waiting after 60 seconds for xpt_config mps_startup
    run_interrupt_driven_hooks: still waiting after 120 seconds for xpt_config mps_startup

    and it never gets past that stage.

    To solve that problem:
    Shut down the VM
    Edit the Freenas VM Settings.
    Detach the SAS Card.
    Boot Freenas.

    Edit the loader.conf
    Add the following to it:

    hw.pci.enable_msi="0" # Driver Interrupts problem SAS2008
    hw.pci.enable_msix="0" # Driver Interrupts problem SAS2008

    Shut down the VM
    Edit the Freenas VM Settings.
    Add the passthrough SAS card. (Your reserved memory must equal the memory allocated to the VM, a VMDirectPath requirement, otherwise it will not boot.)
    Boot Freenas.

    Problem Solved!

    If you use multiple NICs for lagg with MTU 9000 you may have to add the following (I have 4 NICs) to loader.conf (or loaders in 8.0.3-p1):
    kern.ipc.nmbclusters="262144" # Network buffers problem with MTU 9000 and multiple NICs

    Otherwise not all my NICs can be utilized due to buffer constraints (and a lot of my jumbo frames get dropped).
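As a rough sanity check on that number: FreeBSD's standard mbuf cluster is 2 KiB (an assumption about FreeBSD 8.x defaults; jumbo frames additionally draw on separate jumbo clusters), so the tunable reserves roughly half a gigabyte of buffer space:

```shell
# Back-of-envelope: how much memory does kern.ipc.nmbclusters="262144"
# cover, assuming the standard 2 KiB FreeBSD mbuf cluster size?
nmbclusters=262144
cluster_bytes=2048
echo "$(( nmbclusters * cluster_bytes / 1024 / 1024 )) MiB of mbuf cluster space"
```

With 16GB allocated to the VM, that headroom is cheap insurance against four lagg'd interfaces starving each other of buffers.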


    Under ESX5i, if you add more than 1 vCPU, I encountered an “interrupt storm on IRQx” (IRQ18 in my case).
    To solve that, boot into the FreeNAS VM's BIOS and disable the floppy drive, COM ports, printer ports and anything else you are not going to use.
    Save changes.
    Reboot.
    Solved!




    STATS:

    ESX5i

    FreeNAS 8.0.3-p1

    Virtual machine: FreeBSD 64-bit: 16GB RAM: 2 vCPUs of 1 socket each: E1000 VMware adapter

    Passthrough PCIe card: Dell H200 flashed to LSI 9211-8i (P12) firmware (chipset: SAS2008)


    For testing I used some old drives:

    4 x 750GB WD Green drives in a striped UFS config:

    [root@freenas8-test] /mnt# dd if=/dev/zero of=/mnt/raid0/testfile bs=8192k count=1000
    1000+0 records in
    1000+0 records out
    8388608000 bytes transferred in 25.843491 secs (324592679 bytes/sec)
    [root@freenas8-test] /mnt# dd if=/dev/zero of=/mnt/raid0/testfile bs=8192k count=10000
    10000+0 records in
    10000+0 records out
    83886080000 bytes transferred in 271.635798 secs (308818207 bytes/sec)


    3 x 750GB WD Green drives in a RAIDZ1 ZFS config:

    [root@freenas8-test] ~# dd if=/dev/zero of=/mnt/test1-raidz1/testfile bs=8192k count=1000
    1000+0 records in
    1000+0 records out
    8388608000 bytes transferred in 48.232895 secs (173918817 bytes/sec)
    [root@freenas8-test] ~# dd if=/dev/zero of=/mnt/test1-raidz1/testfile bs=8192k count=10000
    10000+0 records in
    10000+0 records out
    83886080000 bytes transferred in 536.209856 secs (156442630 bytes/sec)



    4 x 750GB WD Green drives in a RAIDZ1 ZFS config:

    [root@freenas8-test] /mnt/test1-raidz1# dd if=/dev/zero of=/mnt/test1-raidz1/testfile bs=8192k count=1000
    1000+0 records in
    1000+0 records out
    8388608000 bytes transferred in 29.539966 secs (283974871 bytes/sec)
    [root@freenas8-test] /mnt/test1-raidz1# dd if=/dev/zero of=/mnt/test1-raidz1/testfile bs=8192k count=10000
    10000+0 records in
    10000+0 records out
    83886080000 bytes transferred in 378.921389 secs (221381222 bytes/sec)
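To make the dd figures above easier to compare, here is a small shell sketch converting the reported bytes/sec into MB/s (decimal megabytes, matching dd's own reporting; the input numbers are the 8GB runs above):

```shell
# Convert dd's bytes/sec figures from the 8 GB runs above into MB/s.
to_mbs() { echo $(( $1 / 1000000 )); }

echo "4-disk UFS stripe: $(to_mbs 324592679) MB/s"
echo "3-disk RAIDZ1:     $(to_mbs 173918817) MB/s"
echo "4-disk RAIDZ1:     $(to_mbs 283974871) MB/s"
```

As you would expect, the 4-disk RAIDZ1 lands between the 3-disk RAIDZ1 and the pure stripe.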

    Samba transfer rate: 60MB/s write and 75MB/s read with AIO on (read size: 8192 / write size: 8192).
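For reference, those AIO settings correspond to these standard smb.conf parameters (a sketch; in FreeNAS 8 you would set them through the CIFS service's auxiliary parameters rather than editing smb.conf directly):

```
aio read size = 8192
aio write size = 8192
```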

    (Screenshots 1.png through 7.png attached in the original post.)
  2. ProtoSD, Feb 28, 2012

    Nice job! Thanks for taking the time to put that together and post it here!
  3. marcusone, Jun 8, 2012

    Great guide, thank you!

    Now how do you enable/install VMware Tools so that you can gracefully shut down the system?
  4. arryo, Jun 26, 2012

    For that purpose you don't need VMware Tools; it's already in FreeNAS.
  5. rhymer, Jul 7, 2012

    Thank you! I had the same problem with a SuperMicro X9SCM-F and an LSI 9211-8i controller, and your solution solved it! A tip: you can also add the values as "tunables" in the FreeNAS WebGUI:

    Variable: hw.pci.enable_msi
    Value: 0

    Variable: hw.pci.enable_msix
    Value: 0
  6. sy5tem, Nov 5, 2012

    hello!

    thank you for those tips!

    I am moving my FreeNAS (was a Pentium D with 4GB DDR2, its max) to a new machine: an AMD FX-8320 (8 cores) with 32GB DDR3, using 2x IBM M1015 (flashed without boot ROM), with 4x2TB (NFS/CIFS shares, RAIDZ), 3x1TB (HTPC storage, RAIDZ), 4x320GB (iSCSI/NFS VM storage data, RAIDZ) and 2x C300 64GB SSDs (VM OSes, RAID0).

    Everything was working after your nice tip, but now I'm getting an error (FreeNAS lost device - 0 outstanding 1 refs) on da10 (2TB storage). I need to fully reboot ESXi to get FreeNAS back, and as soon as disk I/O happens it comes back.
    When I get home I will try removing the 4x2TB drives to see if FreeNAS can stay alive....

    The worst thing is that when a disk disconnects for these unknown reasons FreeNAS freezes, and now it seems to have frozen my full ESXi setup (pfSense not responding). I need to push the button when I get back home.

    Will post developments.
  7. vanhaakonnen, Nov 15, 2012

    I have FreeNAS 8.3 installed on a dedicated server with four Gbit NICs in a lagg. On the other side there are two ESXi servers with an NFS datastore on the FreeNAS box. Sometimes my ESXi servers can't find the NFS volume on the FreeNAS server. Could this be a solution for my problem? Is it still necessary on 8.3?
  8. seer_tenedos, Jan 15, 2013

    I tried your solution and it is actually not the best way to do it. With the settings you suggest you will generally have issues with more than 1 CPU or core, and Samba will use up a lot of CPU for a low transfer rate.

    Instead, just set
    hw.pci.enable_msix="0" # Driver Interrupts problem SAS2008
    and leave hw.pci.enable_msi at its default of 1. This lets the card use MSI, a much faster interrupt mechanism than legacy interrupts, which greatly improves performance, while still disabling MSI-X, which is what is causing the issue. As a bonus you will also be able to add more CPUs and cores to the VM, use 10Gb Ethernet controllers, and get Samba speeds of over 150MB/s read and 300MB/s write on a small array with SSDs, with Samba using under 5% CPU, as long as you use the VMXNET3 driver from VMware. I followed this guide to do that: http://forums.freenas.org/showthread.php?9316-VMXNET3-support

    I am sure I could get more out of the system if I tuned it, but I am happy with those speeds for now.
  9. Tysken, Jan 19, 2013

    Thank you seer_tenedos!!!!
    I have been running FreeNAS inside ESXi for a while now with 2 x SAS2008 HBAs and had been restricted to 1 CPU / 1 core because of the IRQ storm problem.
    After removing the hw.pci.enable_msi=0 tunable, everything works great with more than 1 CPU.
    Using the VMXNET3 NIC sounds very interesting; I will give that a try too.
    Thanks again for sharing that info!!
