Re: Queuing of dm-raid1 resyncs to the same underlying block devices

No, lvm/dm-raid does not queue synchronization.

If the image LVs of multiple raid1/4/5/6/10 LVs share the same PVs,
resyncs will run in parallel on all of the affected RAID LVs when requested.

E.g. (see the Cpy%Sync field of the two created raid1 LVs):

LV            VG Attr       LSize   SSize SRes Cpy%Sync Type   #Cpy #Str Stripe SSize PE Ranges
r1            r  Rwi-a-r--- 512.00m 128        18.75    raid1  2    2    0.03m  128   r1_rimage_0:0-127 r1_rimage_1:0-127
[r1_rimage_0] r  iwi-aor--- 512.00m 128                 linear      1    0m     128   /dev/sdf:1-128
[r1_rimage_1] r  iwi-aor--- 512.00m 128                 linear      1    0m     128   /dev/sdg:1-128
[r1_rmeta_0]  r  ewi-aor--- 4.00m   1                   linear      1    0m     1     /dev/sdf:0-0
[r1_rmeta_1]  r  ewi-aor--- 4.00m   1                   linear      1    0m     1     /dev/sdg:0-0
r2            r  Rwi-a-r--- 512.00m 128        31.25    raid1  2    2    0.03m  128   r2_rimage_0:0-127 r2_rimage_1:0-127
[r2_rimage_0] r  iwi-aor--- 512.00m 128                 linear      1    0m     128   /dev/sdf:130-257
[r2_rimage_1] r  iwi-aor--- 512.00m 128                 linear      1    0m     128   /dev/sdg:130-257
[r2_rmeta_0]  r  ewi-aor--- 4.00m   1                   linear      1    0m     1     /dev/sdf:129-129
[r2_rmeta_1]  r  ewi-aor--- 4.00m   1                   linear      1    0m     1     /dev/sdg:129-129
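Output in roughly that shape could come from an lvs call along these lines (the exact -o field list here is an assumption, not taken from the thread; copy_percent, segtype and seg_pe_ranges are lvm2 reporting field names):

```shell
# Hypothetical invocation; the -o field list approximates the columns shown
# above (copy_percent == Cpy%Sync, segtype == Type, seg_pe_ranges == PE Ranges).
lvs -a -o lv_name,vg_name,lv_attr,lv_size,copy_percent,segtype,stripes,stripe_size,seg_pe_ranges r
```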


Though there's no automatic queueing of (initial) resynchronizations,
you can create the two LVs sharing the same PVs with the "--nosync" option,
thus preventing immediate resynchronization; then run
"lvchange --syncaction repair r/r1", wait for it to finish, and run
"lvchange --syncaction repair r/r2" afterwards.

Or create all but one LV with "--nosync" and wait for that one to finish
before using lvchange to start resynchronization of the others.
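A minimal sketch of that sequential workflow, using the VG/LV/PV names from the example above (the polling helper and the 10-second interval are illustrative, not part of lvm2):

```shell
#!/bin/sh
# Sketch: create two raid1 LVs with --nosync so neither starts
# resynchronizing, then scrub them one after the other.

# is_synced PERCENT: succeeds once a Cpy%Sync value has reached 100.
is_synced() {
    [ "${1%%.*}" -ge 100 ] 2>/dev/null
}

resync_sequentially() {
    # VG "r", LVs r1/r2 and PVs /dev/sdf,/dev/sdg as in the example above.
    lvcreate --type raid1 -m 1 -L 512m --nosync -n r1 r /dev/sdf /dev/sdg
    lvcreate --type raid1 -m 1 -L 512m --nosync -n r2 r /dev/sdf /dev/sdg

    for lv in r/r1 r/r2; do
        lvchange --syncaction repair "$lv"
        # Poll Cpy%Sync until this LV's resynchronization has finished.
        until is_synced "$(lvs --noheadings -o copy_percent "$lv" | tr -d ' %')"; do
            sleep 10
        done
    done
}
```

Running the second repair only after the first reports 100 keeps the shared PVs from seeing two resyncs at once.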

BTW:
When you create raid1/4/5/6/10 LVs _and_ never read what you have not
written, "--nosync" can be used anyway in order to avoid the initial
resynchronization load on the devices. Any data written in that case
updates all mirrors/RAID redundancy data.


Heinz


On 09/30/2015 03:22 PM, Brassow Jonathan wrote:
I don’t believe it does.  dm-raid does use the same RAID kernel personalities as MD though, so I would think that it could be added.  I’ll check with Heinz and see if he knows.

  brassow

On Sep 26, 2015, at 10:49 AM, Richard Davies <richard@xxxxxxxxxxxx> wrote:

Hi,

Does dm-raid queue resyncs of multiple dm-raid1 arrays, if the underlying
block devices are the same?

Linux md has this feature, e.g.:

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sda2[2] sdb2[1]
      943202240 blocks [2/1] [_U]
      [====>................]  recovery = 21.9% (207167744/943202240)
      finish=20290.8min speed=603K/sec
      bitmap: 1/8 pages [4KB], 65536KB chunk

md0 : active raid1 sda1[2] sdb1[1]
      67108736 blocks [2/1] [_U]
        resync=DELAYED
      bitmap: 1/1 pages [4KB], 65536KB chunk


After some time investigating, I can't find it in dm-raid.

Please can someone tell me if this is implemented or not?

If it is implemented, where should I look to see it happening?

Thanks,

Richard.

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel




