QUESTION Permanent errors have been detected

Discussion in 'FreeNAS 4 N00bs' started by Brian Buchanan, Aug 22, 2013.

  1. Brian Buchanan New Member

    Member Since:
    Aug 14, 2013
    Message Count:
    16
    Likes Received:
    3
    Trophy Points:
    3
    Brian Buchanan, Aug 22, 2013

    I'm testing FreeNAS 9.1 on an old server.

    I have four 1-TB drives connected to a hardware RAID controller, an Intel SRCS16, configured as RAID-10. The server is an Intel SE7520JR2 with 4 GB of ECC RAM.

    I have a zpool srcs16a0 on the one drive FreeNAS sees, amrd0.

    I've been copying 800+ GB of data onto the server and I'm starting to get "Permanent Errors" on the ZFS pool, and I can't tell where they are coming from.

    The md5 of the files match the source, so the files are intact and correct.

    The SRCS16 controller hasn't indicated any trouble with the array and the ECC memory hasn't reported any memory errors. Could I still be looking at a hardware error?

    If the file doesn't actually have any errors, why is ZFS claiming there are errors?

    I'm currently running a second scrub.

    Code (text):
    [root@freenas3119] ~brian/Brian# zpool status -v
      pool: srcs16a0
     state: ONLINE
    status: One or more devices has experienced an error resulting in data
            corruption.  Applications may be affected.
    action: Restore the file in question if possible.  Otherwise restore the
            entire pool from backup.
       see: http://illumos.org/msg/ZFS-8000-8A
      scan: scrub in progress since Thu Aug 22 09:01:48 2013
            43.6G scanned out of 778G at 48.9M/s, 4h16m to go
            644K repaired, 5.60% done
    config:

            NAME                                          STATE    READ WRITE CKSUM
            srcs16a0                                      ONLINE      0    0    2
              gptid/02d09ce0-0aab-11e3-9882-00110a546e6a  ONLINE      0    0  612  (repairing)

    errors: Permanent errors have been detected in the following files:

            srcs16a0/home/brian:<0x0>
            srcs16a0/home/brian:<0xf93b>
            /mnt/srcs16a0/home/brian/Brian/Music/mp3/Chris Cagel
            /mnt/srcs16a0/home/brian/Brian/Pictures/My Pictures/2013/2013-05-09/Video 2013-05-09 8 08 34 PM.mov
            /mnt/srcs16a0/home/brian/Brian/Pictures/Old/My Pictures/2011/2011-07-21
            /mnt/srcs16a0/home/brian/Brian/HomeMovies/Raw/2006-05-28 Tape11/Tape11-2006.05.28_15-49-18.dv
            /mnt/srcs16a0/home/brian/Brian/HomeMovies/Raw/2006-05-28 Tape11/Tape11-2006.05.28_16-18-37.dv
            /mnt/srcs16a0/home/brian/Brian/HomeMovies/Raw/2006-05-28 Tape12/Tape12-2006.05.28_16-38-23.dv
            /mnt/srcs16a0/home/brian/Brian/HomeMovies/Raw/2006-06-17 Tape13/Tape13-2006.06.17_10-22-07.dv
            /mnt/srcs16a0/home/brian/Brian/Pictures/My Pictures/2013/2013-08-17/2013-08-17 at 19.16.06.jpg


    The first two entries are files I deleted and copied over again before starting the first scrub. The rest of the errors appeared after the first scrub; I've compared their MD5s against the source and they match.
    Code (text):
    # Source (A Drobo FS)
    # md5sum Tape11-2006.05.28_15-49-18.dv Tape11-2006.05.28_16-18-37.dv Tape11-2006.05.28_16-23-18.dv
    4368dbd5fecb02f2becb94f9bea1f7c7  Tape11-2006.05.28_15-49-18.dv
    59e147835905d25ea5da69da6e3a3732  Tape11-2006.05.28_16-18-37.dv
    bb75071c2c88280143b95c903f50f8a6  Tape11-2006.05.28_16-23-18.dv

    # Destination - FreeNAS
    [brian@freenas3119 ~/Brian/HomeMovies/Raw/2006-05-28 Tape11]$ md5 Tape11-2006.05.28_15-49-18.dv Tape11-2006.05.28_16-18-37.dv Tape11-2006.05.28_16-23-18.dv
    MD5 (Tape11-2006.05.28_15-49-18.dv) = 4368dbd5fecb02f2becb94f9bea1f7c7
    MD5 (Tape11-2006.05.28_16-18-37.dv) = 59e147835905d25ea5da69da6e3a3732
    MD5 (Tape11-2006.05.28_16-23-18.dv) = bb75071c2c88280143b95c903f50f8a6
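    Checking files one at a time gets tedious; a whole tree can be verified in one pass. Below is a rough sketch, not from the thread: it walks a source tree and reports any file whose MD5 differs from (or is missing in) the destination. The demo directories and file names are made up, and on FreeBSD/FreeNAS you'd swap `md5sum` for `md5 -q`.

```shell
# compare_md5 SRC DST: report files under SRC whose MD5 differs or is missing in DST.
compare_md5() {
    src=$1; dst=$2
    (cd "$src" && find . -type f | sort) | while read -r f; do
        a=$(cd "$src" && md5sum "$f" | awk '{print $1}')
        b=$(cd "$dst" && md5sum "$f" 2>/dev/null | awk '{print $1}')
        # A missing destination file hashes to nothing, so it gets reported too.
        [ "$a" = "$b" ] || echo "MISMATCH: ${f#./}"
    done
}

# Demo on throwaway directories (made-up data, not the poster's files):
src=$(mktemp -d); dst=$(mktemp -d)
echo "same"   > "$src/a.dv"; echo "same"    > "$dst/a.dv"
echo "origin" > "$src/b.dv"; echo "changed" > "$dst/b.dv"
out=$(compare_md5 "$src" "$dst")
echo "$out"   # -> MISMATCH: b.dv
rm -rf "$src" "$dst"
```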
  2. Brian Buchanan New Member

    Brian Buchanan, Aug 22, 2013

    I'm trying a divide-and-conquer approach. I've removed 2 GB of the RAM and connected two of the drives to the onboard SATA controller, leaving two on the SRCS16. I created a ZFS mirror on the two internal drives and a hardware mirror on the SRCS16. I'm now copying the data onto both pools and I'll see what happens.
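    The two test pools can be sketched from the shell roughly like this. This is an illustration, not from the thread: FreeNAS would normally create the pools through the GUI, and the device names (ada0/ada1 for the onboard SATA ports, amrd0 for the SRCS16 volume) are assumptions to verify with `camcontrol devlist` first.

```shell
# Sketch (assumed device names): a ZFS mirror on the two drives moved to the
# onboard SATA ports -- here ZFS has real redundancy and can self-heal.
zpool create sata_mirror mirror ada0 ada1

# The SRCS16's two-drive hardware mirror appears to FreeBSD as a single device,
# so ZFS can detect corruption via checksums but cannot repair it.
zpool create hw_mirror amrd0

# Watch the CKSUM column on both pools while the copy runs.
zpool status -v
```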
  3. cyberjock Forum Guard Dog/Admin

    Member Since:
    Mar 25, 2012
    Message Count:
    10,299
    Likes Received:
    447
    Trophy Points:
    83
    cyberjock, Aug 22, 2013

    So why are you mixing hardware RAID with ZFS RAID? That's one of the biggest "don't do this to ZFS" warnings in the manual. You did read the manual, right?
  4. Brian Buchanan New Member

    Brian Buchanan, Aug 22, 2013

    The SRCS16 doesn't support JBOD; that's basically the reason, that, and I'm still testing. Yesterday I set up six RAID-0 arrays, each with one drive, then created a RAIDZ2 across the six "arrays". Today I replaced four of the 250GB drives with 1TB drives and thought I'd try a hardware RAID when I rebuilt it.
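    That single-drive-array layout can be sketched as follows. This is an illustration, not from the thread: amrd0 through amrd5 are assumed names for the six volumes the controller exports, and FreeNAS would normally build the pool from the GUI.

```shell
# Sketch (assumed device names): one RAIDZ2 vdev across six single-drive
# "arrays" exported by the SRCS16. ZFS then owns the redundancy and can
# repair checksum errors, surviving up to two failed members.
zpool create testpool raidz2 amrd0 amrd1 amrd2 amrd3 amrd4 amrd5
zpool status -v testpool
```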

    I have not yet seen any errors from the ZFS mirror on the two drives now connected to the motherboard's SATA controllers, nor on the two-drive hardware RAID, but I'm nowhere near finished copying the 800GB test dataset.

    I have found a Firmware update for the SRCS16, so I'm going to update that and see if they added JBOD as an option. Otherwise I'll go back to one-drive arrays and test that way again.
  5. cyberjock Forum Guard Dog/Admin

    cyberjock, Aug 22, 2013

    Then I'd be looking at new hardware bro. You are just asking for nothing but problems going with ZFS and hardware RAID. The two shouldn't ever be mixed.

    One-drive arrays aren't much better. I ran that setup once for two months and I'd never do it again. Too many problems: ZFS is supposed to be your exclusive RAID, not ZFS layered on top of hardware RAID.
  6. cyberjock Forum Guard Dog/Admin

    cyberjock, Aug 22, 2013

    As it currently stands you have no ZFS protection, because the hardware RAID hides the individual disks from ZFS. You could use UFS though, as it's better suited to hardware RAID arrays.
