ZFS volume error after upgrading from 8.0.1 release to 8.0.2

Discussion in 'Installation' started by JeremyEm, Oct 18, 2011.

  1. JeremyEm, Oct 18, 2011

    I have a new FreeNAS box that I just built and have been running in a test environment. Last night I upgraded from the 8.0.1 release to 8.0.2, and afterwards I have an alert and my ZFS volume is hosed.

    I have a 5-drive, 7.1 TB volume that had about 500 GB of data on it (all replicated elsewhere for now). The attached image shows my error (sorry for the tiny size of the image).

    [Screenshot: Screen shot 2011-10-18 at 6.50.10 AM.jpg]

    I'm new to FreeNAS and have very little BSD experience, so I'm not sure what to do to resolve this.
  2. William Grzybowski (FreeNAS Guru), Oct 18, 2011

    From the CLI, try the following:

    # zpool status
    # zpool import

    Paste the output of both, thanks.
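
    For context: "zpool status" only lists pools the system has already imported, while "zpool import" with no arguments just scans the disks for pools that could be imported, without changing anything. Annotated:

    zpool status    # show pools that are currently imported
    zpool import    # scan devices for importable pools; imports nothing by itself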
  3. JeremyEm, Oct 18, 2011

    [root@FIREBALL2] ~# zpool status
    no pools available
    [root@FIREBALL2] ~# zpool import
      pool: Test
        id: 16946581391153097148
     state: UNAVAIL
    status: One or more devices are missing from the system.
    action: The pool cannot be imported. Attach the missing
            devices and try again.
       see: http://www.sun.com/msg/ZFS-8000-3C
    config:

            Test          UNAVAIL  insufficient replicas
              raidz2      UNAVAIL  insufficient replicas
                ada0p2    UNAVAIL  cannot open
                ada1p2    UNAVAIL  cannot open
                ada2p2    UNAVAIL  cannot open
                ada3p2    UNAVAIL  cannot open
                ada4p2    ONLINE

      pool: Test
        id: 14556436343218882027
     state: ONLINE
    action: The pool can be imported using its name or numeric identifier.
    config:

            Test          ONLINE
              raidz1      ONLINE
                ada0p2    ONLINE
                ada1p2    ONLINE
                ada2p2    ONLINE
                ada3p2    ONLINE
                ada5p2    ONLINE
    [root@FIREBALL2] ~#
  4. JeremyEm, Oct 18, 2011

    Here is a screenshot of my disks. They all seem to be online and ok.
    [Screenshot: Screen shot 2011-10-18 at 8.45.11 PM.jpg]
  5. JeremyEm, Oct 18, 2011

    I'm not sure why it shows a raidz2 in there. I only created a raidz1 volume.
  6. JeremyEm, Oct 18, 2011

    So I can run "zpool import 14556436343218882027", and it imports and clears the alert message, but the volume still shows the message

    "Test /mnt/Test None (Error) Error getting available space Error getting total space"

    When I ran the import command, it reported back

    "cannot mount '/Test': failed to create mountpoint"

    If I do "zpool status" I now get

    ===========================================
    [root@FIREBALL2] ~# zpool status
      pool: Test
     state: ONLINE
     scrub: scrub in progress for 0h18m, 64.64% done, 0h10m to go
    config:

            NAME        STATE     READ WRITE CKSUM
            Test        ONLINE       0     0     0
              raidz1    ONLINE       0     0     0
                ada0p2  ONLINE       0     0     0
                ada1p2  ONLINE       0     0     0
                ada2p2  ONLINE       0     0     0
                ada3p2  ONLINE       0     0     0
                ada5p2  ONLINE       0     0     0

    errors: No known data errors
    ===========================================

    It's scrubbing because I told it to scrub, so I could see whether other commands worked against the volume.
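
    For anyone following along, the usual command for that is:

    zpool scrub Test    # walk every allocated block in the pool and verify its checksum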

    I tried rebooting once after doing this, and after the reboot it came up with no volumes and the same problem that started after the upgrade.

    I don't want to run the "zpool destroy" command because it sees both the raidz1 and raidz2 as "Test." I have no idea where the raidz2 came from. I have made several test volumes, but all of them were raidz1.
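
    One way to see where the phantom raidz2 is recorded, without importing or destroying anything, is to dump the ZFS labels straight off each member device; a sketch, assuming the device names from the earlier output:

    zdb -l /dev/ada0p2    # print the ZFS labels on the partition; the name,
                          # pool_guid and vdev_tree fields show which pool
                          # (the raidz1, id 14556436343218882027, or the stale
                          # raidz2, id 16946581391153097148) wrote each label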
  7. JeremyEm, Oct 18, 2011

    I gave up worrying about it. It looks like it was some old data on the drives left over from previous testing. I'm still not sure why it thought it was raidz2.

    I ran "dd if=/dev/zero of=/dev/ada0 bs=1M count=10" (changing ada0 to match each drive) to zero out the beginning of each drive and then verified there were no stale volumes. I then rebuilt a new volume (the old one was a test one with junk data on it).

    I'm dumping a few hundred gigs of data to it, and tomorrow I'll reboot to make sure it all comes up OK and intact. For a little while this will be a test environment anyway, with data being replicated from my home production NAS.
