Re: [PATCH v3 3/3] md/raid1: check array size before reshape

Hi,

On 2023/07/29 20:23, Xueshi Hu wrote:
On Sat, Jul 29, 2023 at 03:37:41PM +0800, Yu Kuai wrote:
Hi,

On 2023/07/29 14:16, Xueshi Hu wrote:
On Sat, Jul 29, 2023 at 11:51:27AM +0800, Yu Kuai wrote:
Hi,

On 2023/07/29 11:36, Yu Kuai wrote:
Hi,

On 2023/07/29 11:29, Xueshi Hu wrote:
I think this is wrong; you should at least keep the following:

            set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
            set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
            md_wakeup_thread(mddev->thread);

I fail to see why the kernel needs to invoke raid1d() again when
nothing has changed. It appears that raid1d() is responsible for
invoking md_check_recovery() and for resubmitting failed I/O.

No, the point here is to set MD_RECOVERY_NEEDED and MD_RECOVERY_RECOVER,
so that md_check_recovery() can start the reshape.
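
(For context, a simplified sketch of the wakeup path, not the literal
kernel code: md_wakeup_thread(mddev->thread) wakes raid1d(), which polls
md_check_recovery() on every pass, and that is where the two bits set
above are consumed.)

        static void raid1d(struct md_thread *thread)
        {
                struct mddev *mddev = thread->mddev;

                /* consumes MD_RECOVERY_NEEDED / MD_RECOVERY_RECOVER */
                md_check_recovery(mddev);

                /* ... then retry queued bios, flush pending writes ... */
        }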
I apologize, but I am still unable to follow your idea.
If mddev->delta_disks == 0, what is the purpose of invoking
md_check_recovery() again?

Sounds like you think raid1_reshape() can only be called from
md_check_recovery(); please take a look at the other callers.
Thank you for the quick reply and your patience.

Of course, I have checked all the callers of md_personality::check_reshape:

- layout_store
- action_store
- chunk_size_store
- md_ioctl
	__md_set_array_info
		update_array_info
- md_check_recovery
- md_ioctl
	__md_set_array_info
		update_array_info
			update_raid_disks
- process_metadata_update
	md_reload_sb
		check_sb_changes
			update_raid_disks
- raid_disks_store
	update_raid_disks

There are three categories of callers other than md_check_recovery():
1. writes to sysfs
2. ioctls
3. instructions received from an md-cluster peer

Using "echo 4 > /sys/devices/virtual/block/md10/md/raid_disks" as an
example, if the mddev::raid_disks is already 4, I don't think
raid1_reshape() should set MD_RECOVERY_RECOVER and MD_RECOVERY_NEEDED
bit, then wake up the mddev::thread to call md_check_recovery().
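
(In other words, with the patch raid1_reshape() would bail out early
when nothing changes; a rough sketch of the disputed early return, not
the literal patch:)

        static int raid1_reshape(struct mddev *mddev)
        {
                /* ... existing validation ... */

                /* raid_disks unchanged: skip setting MD_RECOVERY_RECOVER
                 * and MD_RECOVERY_NEEDED, and don't wake mddev->thread */
                if (mddev->delta_disks == 0)
                        return 0;

                /* ... actual disk-array resizing work ... */
        }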

Is there any counterexample demonstrating the issues that may arise if
md_check_recovery() is not called from raid1_reshape()?


And even if raid1_reshape() is called from md_check_recovery(),
which means the reshape was interrupted, then MD_RECOVERY_RECOVER
and MD_RECOVERY_NEEDED need to be set, so that the reshape can
continue.

raid1 only registers md_personality::check_reshape; all the work related
to reshape is handled in md_personality::check_reshape.
What does "reshape can continue" mean? In other words, what additional
work needs to be done if mddev::raid_disks doesn't change?

Okay, I missed that raid1 doesn't have 'start_reshape'; reshape here
really means recovery, i.e. synchronizing data to new disks. Never mind
what I said about "reshape can continue"; that's not possible for raid1.

Then the problem is the same from the 'recovery' point of view:
if the recovery is interrupted then, before this patch, even if
raid_disks is the same, raid1_reshape() will still set the flags and
md_check_recovery() will try to continue the recovery. I'd like to
keep this behaviour until we can be sure that no user will be
affected.
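
(A sketch of the behaviour being preserved, simplified from
md_check_recovery(), not the exact upstream code:)

        if (test_and_clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery)) {
                /* an interrupted recovery leaves an rdev that is not
                 * In_sync; remove_and_add_spares() finds it and the
                 * recovery is restarted, even though raid_disks has
                 * the same value as before */
                if (remove_and_add_spares(mddev, NULL) > 0)
                        set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
        }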

But md_check_recovery() will never call raid1_reshape().

md_personality::check_reshape() is called from md_check_recovery() only
while a reshape is in progress. But raid1 is special, as
mddev::reshape_position always equals MaxSector in raid1.
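
(The gate being referred to, roughly as it appears in
md_check_recovery(), simplified:)

        if (mddev->reshape_position != MaxSector) {
                /* never taken for raid1: reshape_position stays at
                 * MaxSector, so this path cannot reach raid1's
                 * check_reshape() */
                if (mddev->pers->check_reshape == NULL ||
                    mddev->pers->check_reshape(mddev) != 0)
                        goto not_running;
                set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
        }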

By the way, the concern about the v2 patch [1] is unnecessary for the
same reason.


Well... I just said that "reshape can continue" is not possible for
raid1, and this patch will mean that recovery can't continue in some
cases.

Thanks,
Kuai

[1]: https://lore.kernel.org/linux-raid/ff93bc7a-5ae2-7a85-91c9-9662d3c5a442@xxxxxxxxxxxxxxx/#t


Thanks,
Kuai


Thanks,
Hu


Thanks,
Kuai

Thanks,
Kuai
