Re: Deleting mdadm RAID arrays


 



Marcin Krol wrote:
On Thursday 07 February 2008 03:36:31, Neil Brown wrote:

   8     0  390711384 sda
   8     1  390708801 sda1
   8    16  390711384 sdb
   8    17  390708801 sdb1
   8    32  390711384 sdc
   8    33  390708801 sdc1
   8    48  390710327 sdd
   8    49  390708801 sdd1
   8    64  390711384 sde
   8    65  390708801 sde1
   8    80  390711384 sdf
   8    81  390708801 sdf1
   3    64   78150744 hdb
   3    65    1951866 hdb1
   3    66    7815622 hdb2
   3    67    4883760 hdb3
   3    68          1 hdb4
   3    69     979933 hdb5
   3    70     979933 hdb6
   3    71   61536951 hdb7
   9     1  781417472 md1
   9     0  781417472 md0
So all the expected partitions are known to the kernel - good.

It's not good, really!

I can't trust the /dev/sd* devices - they get swapped randomly depending on the order the modules load! I have two drivers: ahci for the onboard
SATA controllers and sata_sil for an additional controller.

Sometimes the system loads ahci first and sata_sil later, sometimes in the reverse order. Then sda becomes sdc, sdb becomes sdd, and so on.
That is exactly the problem: I cannot rely on the kernel's information about
which physical drive is which logical device!

Then
  mdadm /dev/md0 -f /dev/d_1

will fail d_1, abort the recovery, and release d_1.

Then
  mdadm --zero-superblock /dev/d_1

should work.

Thanks, though I managed to fail the drives, remove them, zero the superblocks and reassemble the arrays anyway. The problem I have now is that mdadm seems to be of 'two minds' about where it gets its information on which disk is what part of the array.
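(Roughly the steps I ended up with, per array, with illustrative device names - not a verbatim transcript:)

  mdadm /dev/md0 --fail /dev/d_1 --remove /dev/d_1    # fail and release each member
  mdadm --stop /dev/md0                                # stop the old array
  mdadm --zero-superblock /dev/d_1                     # wipe each old superblock
  mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        /dev/d_1 /dev/d_2 /dev/d_3                     # rebuild the array on the d_* names
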
As you may remember, I have configured udev to associate /dev/d_* devices with
drive serial numbers (to keep the names from changing with the module loading order at boot).
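The rules are along these lines (the file name and serials below are made up, and ID_SERIAL is assumed to be filled in by the stock persistent-storage rules before this file runs):

  # /etc/udev/rules.d/65-raid-disks.rules (illustrative)
  KERNEL=="sd?1", ENV{ID_SERIAL}=="SAMSUNG_HD403LJ_S0MFJ1AB000001", SYMLINK+="d_1"
  KERNEL=="sd?1", ENV{ID_SERIAL}=="SAMSUNG_HD403LJ_S0MFJ1AB000002", SYMLINK+="d_2"
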
Why do you care? If you are using UUIDs for all the arrays and mounts, does this buy you anything? And more to the point, the first time a drive fails and you replace it, will it cause you a problem? Will it require maintaining the serial-to-name mapping manually?

I don't see the benefit of forcing this instead of just building the information at boot time and dropping it in a file.
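The mapping can be captured once and dropped into the config with something like this (the UUID shown is a placeholder):

  mdadm --detail --scan >> /etc/mdadm.conf

which records lines such as

  ARRAY /dev/md0 level=raid5 num-devices=3 UUID=01234567:89abcdef:01234567:89abcdef

and assembly then goes by UUID, no matter what the kernel decides to call the disks on a given boot.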

Now, when I swap two (random) drives to test whether the device names stay associated with the serial numbers, I get the following effect:

1. mdadm -Q --detail /dev/md* gives correct results before *and* after the swapping:

% mdadm -Q --detail /dev/md0
/dev/md0:
[...]
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/d_1
       1       8       17        1      active sync   /dev/d_2
       2       8       81        2      active sync   /dev/d_3

% mdadm -Q --detail /dev/md1
/dev/md1:
[...]
    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/d_4
       1       8       65        1      active sync   /dev/d_5
       2       8       33        2      active sync   /dev/d_6


2. However, cat /proc/mdstat shows a different layout of the arrays!

BEFORE the swap:

% cat mdstat-16_51
Personalities : [raid6] [raid5] [raid4]
md1 : active raid5 sdb1[2] sdf1[0] sda1[1]
      781417472 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md0 : active raid5 sde1[2] sdc1[0] sdd1[1]
      781417472 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>


AFTER the swap:

% cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md1 : active(auto-read-only) raid5 sdd1[0] sdc1[2] sde1[1]
      781417472 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md0 : active(auto-read-only) raid5 sda1[0] sdf1[2] sdb1[1]
      781417472 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

I have no idea now whether the arrays are functioning (i.e. they track the drives
by the /dev/d_* names and the superblock info is unimportant)
or whether my arrays fell apart because of the swapping. And I made *damn* sure I zeroed all the superblocks before reassembling the arrays. Yet it still shows the old partitions in those arrays!
As I noted before, you said you had these arrays on whole devices previously. Did you zero the superblocks on the whole devices or on the partitions? From what I read, it was the partitions.
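If in doubt, something like this will show where stale superblocks remain (device names illustrative; with 0.90 metadata the superblock sits near the end of the device, so zeroing a partition's superblock does not necessarily touch an old whole-disk one):

  mdadm --examine /dev/sdc           # whole device - any leftover whole-disk superblock?
  mdadm --examine /dev/sdc1          # partition - the current array member
  mdadm --zero-superblock /dev/sdc   # only if --examine reports one on the whole device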

--
Bill Davidsen <davidsen@xxxxxxx>
 "Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismarck


