Re: mdadm: /dev/md0 has been started with 1 drive (out of 2).


 



On 08/11/13 17:32, Ivan Lezhnjov IV wrote:
> Hello,

> so I've successfully rebuilt the array and added an internal bitmap. I haven't run any extensive I/O tests, but I continued copying my data off the old disks and haven't noticed any serious impact. This is a first impression only, but so far so good.

> Now that I have a bitmap, I deliberately repeated the sleep/resume cycle exactly as it was done the last time, which led to array degradation, and sure enough the system started up with a degraded array. In fact, it is even messier this time, because both devices were dynamically assigned new /dev/sdX names: before sleep they were /dev/sdc1 and /dev/sdd1; after resume they became /dev/sdd1 and /dev/sdb1.
I think this is a different issue; RAID is not responsible for device discovery or for mapping devices to names. udev may provide a solution here: it can ensure that each device, identified by some distinct hardware feature (e.g., its serial number), is always configured with a specific device name. I use this often for ethernet devices, but I assume something similar is applicable to disk drives.
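One common way to avoid depending on the sdX letters at all is to use the persistent symlinks udev already creates under /dev/disk/by-id/, which are derived from the drive model and serial number. A sketch of the idea (the device names and the UUID below are placeholders, not taken from Ivan's system):

```
# List the persistent names udev has created for the drives; these
# survive sleep/resume and probe reordering, unlike /dev/sdX.
ls -l /dev/disk/by-id/

# Show the properties (including the serial) udev knows for a disk.
udevadm info --query=property --name=/dev/sdb1

# In /etc/mdadm.conf, restrict scanning to persistent names and
# identify the array by UUID rather than by device letters:
#   DEVICE /dev/disk/by-id/ata-*-part1
#   ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```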

> So, I unmounted the filesystem on the array and stopped the array. Then I reassembled it, and it looks to be in good shape. However, I am wondering whether this is exactly due to the internal bitmap. Basically, what surprised me was that the array was assembled and shown as in sync instantly. Worth noting: before the laptop went to sleep there were no processes writing to the array disks -- I made sure -- so the data should be consistent on both drives, but as we know from my very first message, the event counts may still differ upon resume from sleep.
Yes, given that there were no writes to the array (or minimal writes -- probably there is always something), the re-sync would have been so quick that you would not have noticed it. As mentioned, it can easily complete in a second... for me, it often takes a minute or two because of all the writes happening during the bootup process.
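For what it's worth, you can inspect the bitmap state directly; if resume left dirty bits behind, they show up here. These are standard mdadm invocations, though the device names are just examples:

```
# Show whether the array has a bitmap and its current sync state.
mdadm --detail /dev/md0

# Dump the write-intent bitmap from a member device, including how
# many bits (chunks) are currently dirty.
mdadm --examine-bitmap /dev/sdc1

# /proc/mdstat also shows a "bitmap: X/Y pages" line for the array.
cat /proc/mdstat
```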
> My question is basically whether I'm enjoying the benefits of having the internal bitmap, or whether I got lucky and this time the event count was the same for both drives?
The only way to know for sure would be to examine the drives during the bootup process before the raid array is assembled....

You might see some information in the logs about the resync.
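If you want to compare event counts after the fact, --examine prints them per member: matching counts mean no resync was needed at all, while a small gap that a bitmap resync closed would also show up in the kernel log. Device names are again only examples:

```
# Compare the Events counter on each member; if they match, the
# members were already consistent at assembly time.
mdadm --examine /dev/sdd1 | grep Events
mdadm --examine /dev/sdb1 | grep Events

# The kernel logs bitmap-driven recovery at assembly time.
dmesg | grep -i 'md0\|bitmap'
```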

Regards,
Adam

--
Adam Goryachev
Website Managers
P: +61 2 8304 0000                    adam@xxxxxxxxxxxxxxxxxxxxxx
F: +61 2 8304 0001                     www.websitemanagers.com.au




