ZFS RAIDZ1 disk failure question

Discussion in 'Storage' started by indianajones, Jan 4, 2012.

  1. indianajones, Jan 4, 2012

    I'm looking at setting up a system with 6 disks. I want to place 3 disks in a RAIDZ1 (2+1) configuration, and the other 3 disks in another RAIDZ1 (2+1). I believe I can put these two 2+1 configurations in the same ZFS pool, giving me 4 x drive_capacity with single-drive failure tolerance in each vdev. Do I have that right? Can I put two vdevs in one ZFS pool like that?

    Assuming I'm correct, it means I can withstand 2 drive failures without data loss, as long as the two drives are not in the same vdev. What I haven't been able to determine is what happens if I do in fact have 2 drive failures in the same vdev? Do I lose all data on all 6 drives, or do I lose all data only in that vdev (i.e. I lose only half my total data)?
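    The layout described above would be created by listing two raidz groups in a single `zpool create` (a sketch; the pool name `tank` and the `da0..da5` device names are placeholders for your actual disks):

    ```
    zpool create tank \
        raidz  da0 da1 da2 \
        raidz  da3 da4 da5
    ```

    Here `raidz` means RAIDZ1; each 3-disk vdev contributes 2 disks of usable space, so the pool ends up with roughly 4 x drive_capacity, and `zpool status tank` will show both vdevs.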
  2. dmt0, Jan 9, 2012

    Striping only happens within one RAIDZ1 vdev, but keep in mind that ZFS also stripes the pool's data across all of its top-level vdevs. So if one vdev loses two disks and fails, the whole pool fails with it; you lose all 6 drives' worth of data, not just half. But what's stopping you from setting up RAIDZ2?
  3. indianajones, Jan 9, 2012

    Mainly, ignorance. I read here (http://doc.freenas.org/index.php/Hardware_Requirements#RAID_Overview) that RAIDZ2 is slower than RAIDZ1, which is already pretty slow. So my theory was to create volumes in groups of 3 using RAIDZ1, allowing each group to survive a single drive failure without losing data. So I can (in theory) have 9 drives configured as three 3-drive RAIDZ1 volumes and survive 3 drive failures, as long as the failures are one disk per vdev. And I get 6 drives' worth of storage capacity. If I had 9 drives in a RAIDZ2, what would that gain me over the configuration I just described?
  4. b1ghen, Jan 10, 2012

    The capacity of 7 drives instead of 6 drives.
  5. dmt0, Jan 10, 2012

    AND you can have ANY two drives fail without data loss.
  6. b1ghen, Jan 10, 2012

    oops, forgot about that :)
  7. indianajones, Jan 10, 2012

    Sweet! Thanks for the info. RAIDZ2 it is.
  8. TECK, Jan 13, 2012

    RAIDZ2 is a must if you have 6+ disks. I would be sweating bullets if one disk failed and I ended up rebuilding the array, knowing that if another disk fails during the rebuild, all my data is lost.
  9. naynay, Jul 2, 2012

    I would NEVER recommend raidz1 for ZFS, except for data you don't care about.
    If you actually care about your data, raidz2 is an absolute must.

    Why? Well, raidz1 lulls you into a false sense of security. If one of your disks dies, you think you can just replace it and your pool will keep going. As long as you don't have two drives die at once, you think, you are OK.

    Actually, even with one dead disk you are NOT OK.

    The reason is that unless you are scrubbing that raidz1 volume all the time (unlikely if it's a huge pool), when a disk dies and you attempt to replace it, the whole pool gets resilvered, and every remaining block gets read and checked for errors.
    Now, if some data corruption is discovered on the pool during that replacement, you have no redundancy left to repair it from: your only redundant copy died with the drive you just replaced. You are going to suffer data loss! In fact, the resilver will halt and your pool will remain degraded until you delete the bad data and start resilvering all over again.

    The lesson here is, with raidz1, there is STILL a good chance of data loss. Raidz2 and above is what you need to prevent this very real situation from happening.
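    The replace-and-resilver sequence described above looks roughly like this (a sketch; `tank`, `da2`, and `da6` are placeholder pool and device names):

    ```
    zpool status tank            # identify the FAULTED/UNAVAIL disk
    zpool replace tank da2 da6   # swap the dead disk for a new one; resilver begins
    zpool status tank            # watch resilver progress and per-device CKSUM errors
    ```

    During the resilver, every allocated block on the surviving disks of that raidz1 vdev has to read back cleanly, because there is no other copy left to fall back on.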
  10. ProtoSD, Jul 2, 2012

    Actually, the same thing can happen with raidz2. I've seen it happen several times here in the forums. BUT, you are right, the chances of recovery are better.

    I would say the lesson learned is to scrub your pools often, every two weeks?

    Yeah, I know it sucks waiting for it to scrub, but consider the situation you just described above....
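    The two-week schedule can be automated with cron (a sketch; `tank` is a placeholder pool name, and FreeNAS also lets you schedule scrubs from its web UI):

    ```
    # /etc/crontab: scrub the pool at 03:00 on the 1st and 15th of every month,
    # i.e. roughly every two weeks.
    0   3   1,15   *   *   root   /sbin/zpool scrub tank
    ```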
  11. naynay, Jul 2, 2012

    Data loss with raidz2 is no doubt still possible, but the chances of it happening approach the realm of "lottery winner" statistics. Yes, it is still possible to have two drives die at the same time, and/or to have a block corrupted in exactly the same place on two devices... but it's highly improbable, unless your disks are of very bad quality and from the same bad batch.

    Corruption on raidz1 when a drive is gone is still well within the realm of real possibility. I know this from personal experience, from the exact incident I described above. Fortunately the data I lost was replaceable, but from then on I lost all faith in raidz1.
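    To put rough numbers on that comparison, here is a back-of-the-envelope sketch. The 5% per-disk failure chance during a resilver window is an illustrative assumption, not a measured figure, and it treats failures as independent, which the "same bad batch" caveat above shows is optimistic:

    ```shell
    p=0.05   # assumed per-disk failure probability during the resilver window

    # 3-disk RAIDZ1 after one failure: no redundancy left, so data is lost
    # if either of the 2 surviving disks fails.
    z1=$(awk -v p="$p" 'BEGIN { printf "%.4f", 1 - (1 - p)^2 }')

    # 9-disk RAIDZ2 after one failure: one disk of redundancy remains, so data
    # is lost only if 2 or more of the 8 surviving disks fail.
    z2=$(awk -v p="$p" 'BEGIN { printf "%.4f", 1 - (1 - p)^8 - 8 * p * (1 - p)^7 }')

    echo "raidz1 loss risk during rebuild: $z1"
    echo "raidz2 loss risk during rebuild: $z2"
    ```

    Even with eight survivors instead of two, the raidz2 pool comes out ahead, because it can absorb one more failure mid-rebuild.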
  12. ProtoSD, Jul 2, 2012

    However unlikely it may seem, it HAS happened to several users here in the forums. I'm not one of them, but I have tried to assist those users and can tell you *shit happens*.

    There's actually a known issue with ZFS v15, I believe. I don't have the link handy, but it's posted in one of the threads of one of those unfortunate individuals.
  13. naynay, Jul 2, 2012

    Would be interesting to see that thread. I wonder if it occurred from two drives dying before a replacement, or if it was data loss due to corruption that couldn't be fixed.
