"Unsupportable block size" for disks in ESXi

Discussion in 'Storage' started by millenix, Jan 6, 2013.

  1. millenix, Jan 6, 2013

    Hello everybody,

    I just installed FreeNAS-8.3.0-RELEASE-p1-x64 on my HP MicroServer N40L and created a RAID-Z2 (forced 4k sectors) using 5x 3 TB WD30EFRX (WD Red) HDDs.
    Now I want to run FreeNAS as guest operating system in VMware ESXi 5.1 and access the volume from within the VM.
    I created a physical RDM (vmkfstools -z) following http://blog.davidwarburton.net/2010/10/25/rdm-mapping-of-local-sata-storage-for-esxi/ and assigned the virtual disks to the VM.
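    For reference, the mapping commands looked roughly like this on the ESXi shell (the device name and datastore path below are placeholders, not my exact values):

        # Physical (pass-through) RDM pointing at one of the WD Reds;
        # the real device name comes from: ls /vmfs/devices/disks
        vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD30EFRX_EXAMPLE \
            /vmfs/volumes/datastore1/freenas/wd30efrx-1-rdmp.vmdk
        # The virtual RDM variant I also tried (rejected by ESXi on this box):
        # vmkfstools -r /vmfs/devices/disks/t10.ATA_____WDC_WD30EFRX_EXAMPLE \
        #     /vmfs/volumes/datastore1/freenas/wd30efrx-1-rdm.vmdk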
    They show up as disks, but I get "Unsupportable block size" errors in the console. Switching to virtual RDM (vmkfstools -r) doesn't work due to VMware limitations, and it wouldn't expose any S.M.A.R.T. data anyway, so that's not really an option either.
    I talked to someone in the VMware channel who ran into the same problem, and I found this thread http://forums.nas4free.org/viewtopic.php?f=16&t=1020&start=20 describing a similar problem in NAS4Free.
    Any ideas appreciated.

    Regards,

    Thomas
  2. jgreco (Resident Grinch), Jan 6, 2013

    What exactly is the point of all this? You have additional space for ESXi datastores in your N40L or something?
  3. cyberjock (Forum Guard Dog/Admin), Jan 6, 2013

    Ok... so let me be blunt (and maybe sound like a jerk... sorry if I do).

    The HP N40L, I believe, only supports 8GB of RAM. You shouldn't be using less than 6GB for FreeNAS, so why do you want to do this at all? (I think this is why jgreco is asking the question.)

    Also, if you do a lot of research you'll figure out that if your data is important, you shouldn't be doing RDM for your disks with FreeNAS. One of the many things that goes horribly wrong is that you lose S.M.A.R.T. monitoring (there's a quick check below). I told some guy off a week or two ago in another thread because he didn't get it. The manual states:

    By virtualizing with ESXi you are removing the exclusive disk access.
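    You can see the S.M.A.R.T. problem for yourself from the FreeNAS console. A quick check, assuming one of the RDM disks shows up as da1 (adjust the device name to yours):

        # query SMART data for the disk; behind an RDM this typically
        # errors out or comes back with no useful attributes
        smartctl -a /dev/da1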

    I tried to dabble with ESXi alongside jgreco (a bunch of PMs back and forth) and finally gave up. While I could have gotten it working with RDM, the risk wasn't worth the potential reward. When ESXi goes bad, it seems to go horribly bad and take out disks with it.

    My rule of thumb in the forums is this: "If you aren't capable of setting up ESXi by yourself and understanding any errors that may come up, on your own and without Google searching, then you shouldn't be using FreeNAS on ESXi." There's a boatload of reasons why this can go very badly, but feel free to search for "ESXi" on the forum and see other people's comments. They range from "disks suddenly disappear" to "I pulled a disk and FreeNAS had no clue a disk was 'failed'".
  4. millenix, Jan 7, 2013

    Thanks for your replies.
    I'm using 16 GB of ECC RAM in my HP N40L and so far there are no problems with it.
    I wanted to compare some setups (non-virtualized vs. virtualized) and check performance, so I needed access to my RAID-Z2 from within FreeNAS running on ESXi.
    It would be really neat to have a storage setup that works the same whether I use virtualization or not.
    But thanks for pointing out that there are some issues. I'll have a deeper look at it today. I thought there was maybe some driver update for the storage subsystem that ESXi presents to the guest OS, or something like that. Nevertheless, if there IS a way to get RDM working somehow, I'd be happy if you have a link or something for me.
  5. cyberjock (Forum Guard Dog/Admin), Jan 7, 2013

    Is there a greater than 0% chance of you getting it to work? Yes.

    Would I ever recommend someone use RDM if access to the data is important (for instance, in a business environment)? Absolutely not.

    Would I ever recommend someone use RDM without complete and very thoroughly updated backups? Absolutely not.

    The problem is that if you are trying to use FreeNAS, it is most likely for ZFS and its bada**-ness. But by going to RDM you are instantly breaking a lot of stuff in ZFS as well as in FreeNAS/FreeBSD. For reliability you are probably better off looking for something else that is better designed for use in a virtualized environment. ZFS just wasn't designed with any thought for virtualization.

    This forum is filled with people that have had servers go from working fine to unrecoverable zpools just by rebooting FreeNAS. So no, I don't even bother trying to help with RDM, or even worry about how one could "effectively" get RDM to work, because there is no "effectively". It works until it decides not to work. And when it decides not to work, it goes bad real fast. The other guy got all upset and basically told me I was a jerk for not helping him, but none of the ESXi wizards made even a single post in that thread, because they've seen so many bad installations they don't bother trying to help. So don't expect a lot of help with doing something that the ESXi wizards would call "stupid". I'm definitely no ESXi wizard, and I'll let you jump right off the bridge if you want; I've warned you. But don't expect me to throw a rope when I told you jumping was a bad idea.

    The powers that be in the forum don't bother helping people do ill-conceived and ill-executed things with their FreeNAS installation.
  6. millenix, Jan 7, 2013

    I don't want to lose my data, so I'll follow your advice. All data is still on another machine. Right now everything is a big playground for me, nothing more.
    Could you tell me the reason for these "Unsupportable block size" messages and the procedure to overcome the issue (via PM, if you like)?
  7. cyberjock (Forum Guard Dog/Admin), Jan 7, 2013

    Not a clue how to fix it. Never heard of the error before. But I wouldn't expect much of a response from the ESXi gurus, because you definitely won't be getting that error if you are using PCIe passthrough (which is pretty much the only recommended way to use ESXi with FreeNAS).
  8. millenix, Jan 7, 2013

    The N36L/N40L doesn't support DirectPath I/O, so that is not an option. I'll stick to multibooting then, with no virtualization, if I really want multiple OSes to access ZFS.
  9. jgreco (Resident Grinch), Jan 7, 2013

    Minor correction: the N40L supports 16GB the same way many Atoms support 8GB... undocumented-but-works. 16GB is big enough to be ESXi-useful, and you can probably find some trite VMs that you could run alongside FreeNAS.

    Right line of thinking, wrong specifics.

    Problem 1:

    ESXi requires a datastore on which to maintain VM data files and disk images, and that cannot be self-hosted on a virtualized NAS. Chicken-and-egg. ESXi will not use a USB disk or USB flash for a datastore, which is exactly why I asked what I asked. The N40L has five bays total. It *is* possible to add another controller card such as a BR10i or M1015, or even a plain SATA controller, for ESXi to use for datastores, and there's space in the N40L case for an extra 2.5" disk or two if you use tape. In general, though, the N40L is quite cramped for hosting 5 full-size disks PLUS extra hardware for ESXi datastores.

    Problem 2:

    HP MicroServer does not support PCI passthrough, which is a damn shame.

    Fundamentally, FreeNAS is a real nice system and a real nice concept. However, the USB flash thing has its ups and downs. For a home user, it's probably pretty great. For us, it's real inconvenient to have a USB flash on a system that's half a continent away, and I've seen some failures where the flash has somehow become corrupt... it's great that the data on the FreeNAS server is well protected, but the FreeNAS system itself is poorly protected against failures. Updating it remotely (remember the 1GB->2GB size bump?) or replacing possibly failed devices is a pain.

    That's where ESXi could really shine. Stick in an inexpensive ESXi-supported RAID controller. Throw in some small SSDs. Suddenly you have awesome-fast boot (no more minutes to load FreeNAS over USBv1!) AND it is redundant AND you can do installs and upgrades easily. With ESXi you basically have a large supply of USB keys that you can switch around without touching the hardware, and you can even run more than one at a time, for that annoying (and inevitable) case where you forgot to set something up and you wish you could have both the old and new server running at the same time so you could see just what you did last time.
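    To make that concrete: before an upgrade you can simply clone the FreeNAS boot disk image from the ESXi shell and keep the old one around. A rough sketch, with example datastore paths (adjust to your own layout):

        # clone the current FreeNAS boot vmdk so the "old USB key" is still there
        vmkfstools -i /vmfs/volumes/datastore1/freenas/freenas-boot.vmdk \
            /vmfs/volumes/datastore1/freenas/freenas-boot-pre-upgrade.vmdk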

    That plus the other positives of virtualization make me understand why someone would try to do this. However, from a practical point of view, the Microserver is a poor platform for ESXi. It is lacking many of the features that would make for an awesome hypervisor platform.

    Anyways, we're an ESXi4 shop here, so I have no comment on RDM (an ESXi5 feature) other than I'd think it could be made to work. But we've seen a lot of people come through here with tears for their shattered data eaten by their questionable virtualization platform. I'm pretty convinced that the only safe and sane way to virtualize FreeNAS is by starting out with a hardware platform that is absolutely positively designed for virtualization, which means a modern Dell/IBM/HP/Supermicro with a proven and tested server-grade VT-d implementation, the correct Xeon CPU, and the correct PCI-e hardware. noobsauce80 and I spent a little time trying to get ESXi+FreeNAS up and running on something that theoretically supported it, which turned into kind of a horrifying scenario where it kind-of sort-of seemed to work (worked fine, then didn't work, etc). It was bad. I see strong, compelling reasons for making sure that not only the storage devices but also the storage controllers are owned by the FreeNAS kernel - passing the controller itself in seems to be best.
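    For what it's worth, a quick sanity check from inside the guest: when the controller itself has been passed through, FreeNAS should see the real HBA with its native driver rather than VMware's emulated adapter. Something along these lines from the FreeNAS (FreeBSD) shell, with device names that will obviously vary:

        # the passed-through HBA should show up as real hardware with its
        # native driver (e.g. mps for an LSI SAS2008-based card like the M1015)
        pciconf -lv
        # and the pool disks should hang off that driver, not off the
        # emulated LSI adapter (mpt) that ESXi presents for virtual disks
        camcontrol devlist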

    Me, I'm paranoid, and so I've designed a FreeNAS server that also happens to be fully supported by ESXi. Lots of experimentation had already convinced me that our N36L is a waste of watts, because for only ~10 watts more a Xeon E3-1230 platform provides better performance while also sitting nearly idle when serving files under load. So our old 1U storage servers have been getting upgraded to X9SCI-LN4Fs with 32GB, an E3-1230, and an M1015 crossflashed to IR mode to give ESXi a RAID1 datastore for boot and the FreeNAS VM. The difference is that you can actually load up some heavier VMs on the unused capacity of the Xeon and get virtualization-style efficiencies.

    So I definitely understand why people want to run ESXi. However, as much as I might like virtualization, the fact remains that noobsauce80, myself, and many others have seen both the relatively few successes (usually with higher end enterprise grade server gear) and the many failures, most especially including the many people who have entirely lost their pools when something went awry.

    And I tell you all of this so that you have the full context of why I'm about to say this:

    If you have an N40L, it makes a great (if somewhat underpowered) FreeNAS box. Set it up, stick it in a corner, and leave it the hell alone. But don't make it more complicated. It is a poor ESXi platform.
