Disk UNAVAIL even though it's online

Discussion in 'Storage' started by stuom, Jan 12, 2012.

  1. stuom New Member

    stuom, Jan 12, 2012

    I was planning to migrate from FreeNAS 0.7 to FreeNAS 8, but then I looked at the current ZFS pool status:

    pool: tank
    state: DEGRADED
    status: One or more devices could not be used because the label is missing or
    invalid. Sufficient replicas exist for the pool to continue
    functioning in a degraded state.
    action: Replace the device using 'zpool replace'.
    see: http://www.sun.com/msg/ZFS-8000-4J
    scrub: scrub stopped after 0h1m with 0 errors on Thu Jan 5 11:35:29 2012
    config:

    NAME STATE READ WRITE CKSUM
    tank DEGRADED 0 0 0
    raidz1 DEGRADED 0 0 0
    ad4 ONLINE 0 0 0
    ad6 ONLINE 0 0 0
    ad10 ONLINE 0 0 0
    ad12 ONLINE 0 0 0
    9708325104572171644 UNAVAIL 0 0 0 was /dev/ad14
    ad16 ONLINE 0 0 0

    errors: No known data errors


    All disks seem to be ONLINE according to Status|Disks.

    I cannot replace the drive from the CLI.
    [sami@nassi /]$ zpool replace tank 9708325104572171644
    cannot open '9708325104572171644': no such GEOM provider
    must be a full path or shorthand device name
    [sami@nassi /]$ zpool replace tank ad14
    cannot use '/dev/ad14': must be a GEOM provider or regular file
    [sami@nassi /]$ zpool replace tank /dev/ad14
    cannot use '/dev/ad14': must be a GEOM provider or regular file

    How can I fix the pool? Or would it be safe to just attach the drives to a FreeNAS 8 machine and do an Auto-Import?
  2. William Grzybowski Active Member

    William Grzybowski, Jan 12, 2012

    Are you sure the disk ad14 didn't fail?
    The correct command would be: zpool replace tank 9708325104572171644 /dev/ad14

    Post the output of: # sysctl kern.disks
  3. stuom New Member

    stuom, Jan 13, 2012

    What do you mean by failing? There has been something wrong with either the power cord or the SATA connection, but I haven't previously had a situation where the pool has remained in the DEGRADED state after a reboot.

    [sami@nassi /]$ zpool replace tank 9708325104572171644 /dev/ad14
    cannot use '/dev/ad14': must be a GEOM provider or regular file

    [sami@nassi /]$ sysctl kern.disks
    kern.disks: da0 ad16 ad14 ad12 ad10 ad6 ad4
  4. stuom New Member

    stuom, Jan 31, 2012

    OK. I think I'll try formatting the drive and then plugging it back in.

    What procedure do you recommend?
    Is there a way to format a single drive with the GUI? Or from the CLI?
    What should I do before formatting, and what after, in order to get the raidz resilvered after attaching the formatted drive?
    I know many of you recommend doing a backup first, but unfortunately that's not an option this time. That's why I'm trying to find the safest possible way to get the raidz back to ONLINE from its DEGRADED state.
  5. William Grzybowski Active Member

    William Grzybowski, Jan 31, 2012

    To be honest I don't know why the command didn't work; it looks like a bug in this old ZFS version...
    You can try booting FreeNAS 8 and replacing it there via the CLI. I don't think formatting is necessary.
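
    A rough sketch of what that would look like from the FreeNAS 8 CLI, assuming the pool imports there and the failed disk shows up under the newer device naming (say as ada4):

    zpool replace tank 9708325104572171644 ada4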
  6. stuom New Member

    stuom, Feb 1, 2012

    OK. Do you think booting the same hardware into FreeNAS 8 and then doing an auto-import is safe?
    Even though the pool is in a degraded state.

    Should the pool be first exported in FreeNAS 7?
    Are there any other requirements?
  7. William Grzybowski Active Member

    William Grzybowski, Feb 2, 2012

    Yes, I think it is... exporting in FreeNAS 7 is not a requirement, but it is good to do so...
    That's about it... and hopefully ZFS v15 can do the job... =)
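
    In rough terms, and assuming the pool is still named tank, that would be:

    zpool export tank    # on the FreeNAS 7 system, before shutting down
    zpool import tank    # on FreeNAS 8, or use Auto Import in the GUI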
  8. stuom New Member

    stuom, Feb 16, 2012

    I updated to 8.0.1 (as I had it previously burned to DVD), but the server kept shutting down by itself. I did an auto-import but didn't have time (before the shutdown) to check if everything was all right.
    I then upgraded to 8.0.3-p1.
    Now the server doesn't shut down, but there is still something wrong with the pool.

    zpool status on the CLI gives
    no pool available

    zpool import gives:

    pool: tank
    id: 15914800062545038301
    state: FAULTED
    status: One or more devices contains corrupted data.
    action: The pool cannot be imported due to damaged devices or data.
    The pool may be active on another system, but can be imported using
    the '-f' flag.
    see: http://www.sun.com/msg/ZFS-8000-5E
    config:

    tank FAULTED corrupted data
    raidz1 ONLINE
    ada0 ONLINE
    ada1 ONLINE
    ada2 ONLINE
    ada3 ONLINE
    9708325104572171644 UNAVAIL corrupted data
    ada5 ONLINE

    On the GUI under View Disks I see all the disks, but ada4 shows its identifier as
    {devicename}9708325104572171644
    instead of
    {serial}WD-WCAV58981613
    like the other disks.

    What do you recommend?
    Should I take out the faulty disk and check whether it's working? How?
    Or could I just unformat it and plug it back in? How?
  9. William Grzybowski Active Member

    William Grzybowski, Feb 16, 2012

    Try PC-BSD 9, OpenIndiana/OpenSolaris, or FreeBSD 9 and then import the pool there...
    Something went wrong with the pool; those OSes have a newer version of ZFS that may be able to fix it...
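
    A minimal sketch of the import on one of those systems; the -f may be needed because the pool was last used on a different host:

    zpool import           # lists importable pools
    zpool import -f tank   # force the import despite the "may be active on another system" warning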
  10. stuom New Member

    stuom, Feb 16, 2012

    I have seen that suggestion before, but I was wondering: what then?
    After a (successful) import in OpenIndiana, can I just shut down the server and reboot it into FreeNAS 8, hoping the pool is fixed?
    Does importing with a newer version of ZFS make any alterations to the pool that would prevent it from being imported with an earlier version of ZFS?
  11. William Grzybowski Active Member

    William Grzybowski, Feb 16, 2012

    Yes.

    It shouldn't, unless you manually run "zpool upgrade".
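
    If you want to double-check before going back, something along these lines should confirm the pool is still at the old on-disk version (assuming the tools behave the same on the newer OS):

    zpool get version tank
    zpool upgrade          # with no arguments this only lists pools below the current version, it changes nothing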
  12. stuom New Member

    stuom, Feb 25, 2012

    zpool import tank
    in a system with ZFS v28 still leaves me with a pool where one device (/dev/ada4) is UNAVAIL.
    The server starts a scrub but shuts down in the middle of it.

    Is there a CLI command I could try to replace that one disk and start a resilver? Or do I have to detach the disk first and unformat it somehow in order for ZFS to recognize it as a "new" disk?
    zpool replace tank {long id} /dev/ada4
    gives me
    invalid vdev specification
    use '-f' to override following errors:
    /dev/ada4 is part of an active pool 'tank'
  13. William Grzybowski Active Member

    William Grzybowski, Feb 26, 2012

    Wait a minute.. slowly...

    Paste the output of "zpool status", "sysctl kern.disks" and "zpool import"

    The device number might have changed across kernels/versions...

    You might have to destroy the partition table on ada4, or destroy the ZFS metadata on it, before proceeding (that is, if you're really sure ada4 is the right disk):
    gpart destroy -F ada4
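
    One way to make sure ada4 really is the right physical disk before wiping anything is to compare its serial number with what the GUI shows for the other drives; on FreeBSD something like this should do it:

    diskinfo -v ada4                        # the "Disk ident." line usually carries the serial
    camcontrol identify ada4 | grep serial  # serial number from the ATA identify data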
  14. stuom New Member

    stuom, Feb 26, 2012

    [sami@freenas ~]$ zpool status
    pool: tank
    state: ONLINE
    status: One or more devices could not be used because the label is missing or
    invalid. Sufficient replicas exist for the pool to continue
    functioning in a degraded state.
    action: Replace the device using 'zpool replace'.
    see: http://www.sun.com/msg/ZFS-8000-4J
    scan: scrub in progress since Sat Feb 25 16:42:11 2012
    1.09T scanned out of 4.42T at 198M/s, 4h53m to go
    0 repaired, 24.62% done
    config:

    NAME STATE READ WRITE CKSUM
    tank ONLINE 0 0 0
    raidz1-0 ONLINE 0 0 0
    ada0 ONLINE 0 0 0
    ada1 ONLINE 0 0 0
    ada2 ONLINE 0 0 0
    ada3 ONLINE 0 0 0
    9708325104572171644 UNAVAIL 0 0 0 was /dev/ada4
    ada5 ONLINE 0 0 0

    errors: No known data errors


    [sami@freenas ~]$ sysctl kern.disks
    kern.disks: ada5 ada4 ada3 ada2 ada1 ada0 da0

    [sami@freenas ~]$ zpool status
    [sami@freenas ~]$

    This time the ada4 disk is missing from Status | Disks in FreeNAS 0.7.5 TEST ONLY (revision 8710)
    FreeBSD 9.0-RC2 (revision 199506)

    Disk Capacity Description Device model Serial number I/O statistics Temperature Status
    n/a n/a 0.00 KiB/t, 10 tps, 0.00 MiB/s n/a MISSING
  15. stuom New Member

    stuom, Feb 28, 2012

    I'd like to add that there is one difference: when the pool is imported in an earlier version of FreeNAS (with an earlier zpool version), its state is shown as DEGRADED, whereas in FreeNAS 8 its state is ONLINE.

    Is there any way to see the partition table or similar information and determine why one disk is shown as UNAVAIL?
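
    For reference, a couple of commands that should show that kind of information on FreeBSD (assuming the base system tools are available):

    gpart show ada4     # the partition table, if the disk has one
    zdb -l /dev/ada4    # dumps the ZFS labels on the device, or reports that they cannot be read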
  16. William Grzybowski Active Member

    William Grzybowski, Feb 28, 2012

    Ok, so booting in a newer version has fixed the issue and you can come back to 8.x...

    So, like I said, you need to wipe the zfs metadata on ada4... or try using the -f...

    zpool replace -f tank 9708325104572171644 /dev/ada4

    If that doesn't work:
    dd if=/dev/zero of=/dev/ada4 bs=1m count=1
    dd if=/dev/zero of=/dev/ada4 bs=1m oseek=`diskinfo ada4 | awk '{print int($3 / (1024*1024)) - 4}'`

    Then replace again
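
    After the wipe, the replace plus a status check to watch the resilver would look roughly like this (the -f should only be needed if ZFS still complains that ada4 belongs to an active pool):

    zpool replace -f tank 9708325104572171644 /dev/ada4
    zpool status tank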
