XFS or MDADM issue?

Greetings.  I am not sure where to inquire next about my issue, and I would appreciate any suggestions on whether XFS could be involved.  I already submitted this bug report against mdadm on 5/31 (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=675394#5), with no updates so far.

Please read my bug report and let me know whether I should consider this a potential XFS issue.  Thank you for your time and any help you can lend!

Bottom line: since growing my RAID array and switching RAID levels with mdadm, I am unable to copy a file to the array intact.  The copy always has a different md5sum than the original, i.e. it is corrupted.

In addition, it appears that all files already on the array have different md5sums after this procedure.

File System: XFS
mdadm --detail /dev/md127
------------------------------------------------------------------------------------------------------------------------------------
/dev/md127:
        Version : 0.90
  Creation Time : Sun Jan 25 22:44:41 2009
     Raid Level : raid6
     Array Size : 8790815616 (8383.58 GiB 9001.80 GB)
  Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 127
    Persistence : Superblock is persistent

    Update Time : Mon May  7 00:51:55 2012
          State : clean
 Active Devices : 8
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : d861d754:1a1065fe:c230666b:5103eba0
         Events : 0.386680

    Number   Major   Minor   RaidDevice State
       0       8      129        0      active sync   /dev/sdi1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8      113        3      active sync   /dev/sdh1
       4       8       17        4      active sync   /dev/sdb1
       5       8       96        5      active sync   /dev/sdg
       6       8       80        6      active sync   /dev/sdf
       7       8       64        7      active sync   /dev/sde
--------------------------------------------------------------------------------------------------------------------------------------------
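
If it would help to rule out a parity inconsistency, my understanding is that md exposes a consistency check through sysfs (assuming the array is md127, as above); roughly:

  echo check > /sys/block/md127/md/sync_action   # start a read-only parity check (needs root)
  cat /proc/mdstat                               # shows the check's progress
  cat /sys/block/md127/md/mismatch_cnt           # non-zero after the check means parity and data disagree

I have not run this yet, but I am happy to if the mismatch count would be diagnostic here.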

This RAID rearrangement was done the following way:

1) Started with 5 devices in a RAID 5 array, functioning correctly.
2) Shut down: <shutdown -h>
3) Installed 3 additional hard disks, same manufacturer and size but with newer firmware. I verified via the manufacturer's website that the firmware was compatible.
4) Booted up and determined the device names.
5) Successfully added the drives to the array: <mdadm /dev/md127 --add /dev/sdg>, etc.
6) Successfully grew the array: <mdadm --grow /dev/md127 -n 8 -l 6>. This took a very long time, as expected (see the monitoring sketch after this list).
7) Successfully resized the filesystem: <xfs_growfs -d /dev/md127>
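
For what it is worth, here is roughly how I watched and confirmed the reshape in steps 6 and 7 (a sketch from memory, not exact transcripts):

  cat /proc/mdstat            # reshape progress while the grow runs
  mdadm --detail /dev/md127   # afterward: "Raid Level : raid6" and "State : clean"
  dmesg | grep -i raid        # no reshape errors or I/O errors logged

The --detail output near the top of this message is what the array reports now, after the reshape finished.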

Then I started noticing problems with files on the array.  Specifically, the md5sums of files on the array no longer matched the md5sums recorded when those files were originally copied there, prior to the above commands.

I figured I would confirm the problem with the following test procedure:

1) mkdir Test_Files/
2) dd if=/dev/urandom bs=1024 count=5000000 of=5GB_Rand
   dd if=/dev/zero bs=1024 count=5000000 of=5GB_Zero
3) md5sum test:

Original test files, created on the boot drive:

blair@debian:~$ md5sum /home/blair/Test_Files/5GB_Rand 
0fed0abb19ea7962830e54108631ddac  /home/blair/Test_Files/5GB_Rand

blair@debian:~$ md5sum /home/blair/Test_Files/5GB_Zero
20096e4b3b80a3896dec3d7fdf5d1bfc  /home/blair/Test_Files/5GB_Zero

4) Copied the files to the /dev/md127 RAID array (mounted at /mnt/movies)
5) md5sum test on the copied files:

blair@debian:~$ md5sum /mnt/movies/Test_Files/5GB_Rand 
419175a78977007f3d5e97dcaf414b61  /mnt/movies/Test_Files/5GB_Rand

blair@debian:~$ md5sum /mnt/movies/Test_Files/5GB_Zero
5846bed2b52532719d4812172a8078ce  /mnt/movies/Test_Files/5GB_Zero

6) Copied the files from the /dev/md127 array back to the boot drive
7) md5sum test:

blair@debian:~$ md5sum /home/blair/Test_Files/Test_Files_Copy/5GB_Rand 
419175a78977007f3d5e97dcaf414b61  /home/blair/Test_Files/Test_Files_Copy/5GB_Rand

blair@debian:~$ md5sum /home/blair/Test_Files/Test_Files_Copy/5GB_Zero
5846bed2b52532719d4812172a8078ce  /home/blair/Test_Files/Test_Files_Copy/5GB_Zero
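
To make this round trip easy to repeat, the same test can be scripted with a checksum manifest (a sketch using the file names above; /mnt/movies is where md127 is mounted):

  cd /home/blair/Test_Files
  md5sum 5GB_Rand 5GB_Zero > MD5SUMS                 # record the original sums
  cp 5GB_Rand 5GB_Zero MD5SUMS /mnt/movies/Test_Files/
  (cd /mnt/movies/Test_Files && md5sum -c MD5SUMS)   # reports FAILED for both files, matching the sums above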


I am not sure what in this process could have created the problem.  I filed the bug report against mdadm because I have successfully grown this RAID in the past with mdadm and XFS: in that instance from 4 devices to 5, staying on RAID 5 (no level switch), and growing the XFS filesystem afterward.  The file problems started as soon as I executed the recent grow to 8 devices and RAID 6, as described above.

To install the three extra drives, I added an internal SATA host adapter, as my board has only 6 integrated SATA ports.  I am not sure whether this is relevant.
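
Since I believe the three new drives (/dev/sde, /dev/sdf, /dev/sdg, the whole-device members above) sit on that adapter, I assume one way to test the adapter path in isolation is to read the same raw region twice and compare checksums, e.g.:

  # Two identical reads of the same 4 GiB region from a drive on the new card;
  # differing sums would point at the controller/cabling rather than XFS or md.
  dd if=/dev/sde bs=1M count=4096 2>/dev/null | md5sum
  dd if=/dev/sde bs=1M count=4096 2>/dev/null | md5sum

If that is a sensible test, I can run it and report back.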

-Blair Sonnen

blair.sonnen@xxxxxxxxx
801.696.4353




_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
