On Mon, May 29, 2023 at 6:14 AM Li Nan <linan666@xxxxxxxxxxxxxxx> wrote:
>
> On 2023/5/29 21:00, Yu Kuai wrote:
> > Hi,
> >
> > On 2023/05/27 17:20, linan666@xxxxxxxxxxxxxxx wrote:
> >> From: Li Nan <linan122@xxxxxxxxxx>
> >>
> >> When adding a new disk to raid10, the code traverses conf->mirrors
> >> from the start and picks one of the following mirrors to add it to:
> >> 1. mirror->rdev is set to WantReplacement and has no replacement:
> >>    set the new disk as mirror->replacement.
> >> 2. there is no mirror->rdev: set the new disk as mirror->rdev.
> >>
> >> Consider an array as below (sda is set to WantReplacement):
> >>
> >>     Number   Major   Minor   RaidDevice State
> >>        0       8        0        0      active sync set-A   /dev/sda
> >>        -       0        0        1      removed
> >>        2       8       32        2      active sync set-A   /dev/sdc
> >>        3       8       48        3      active sync set-B   /dev/sdd
> >>
> >> When 'mdadm --add' is used to add a new disk to this array, the new
> >> disk becomes sda's replacement instead of being added to the removed
> >> position, which is confusing for users. Moreover, after the new disk
> >> finishes recovery, sda is set Faulty.
> >>
> >> Prioritizing adding the disk to a 'removed' mirror is a better
> >> choice. In the above scenario, the behavior is the same as before,
> >> except sda will not be deleted. Until other disks are added,
> >> continuing to use sda is more reliable.
> >>
> >
> > I think this change makes sense; however, it would be better to do
> > this for all personalities instead of just raid10.
> >
> > Thanks,
> > Kuai
>
> raid5 already works like this. If others are OK with this change to
> raid10, I will modify raid1 later.

This change looks reasonable. Could you please add an mdadm test to
cover this case?

Applied to md-next. Thanks,
Song
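
As a rough illustration of the prioritized selection the commit message
describes, here is a minimal user-space sketch. It is not the kernel
patch itself: struct mirror and choose_slot are invented for this
example, and the boolean flags only model the removed/WantReplacement
states discussed above.

/*
 * User-space model of the slot-selection policy (illustrative only,
 * not the actual raid10 driver code). A fully removed slot now takes
 * priority; a WantReplacement slot is only remembered as a fallback.
 */
#include <stdbool.h>
#include <stdio.h>

struct mirror {
	bool has_rdev;          /* slot currently holds a member disk */
	bool want_replacement;  /* member disk is marked WantReplacement */
	bool has_replacement;   /* a replacement is already attached */
};

/* Returns the chosen slot index, or -1 if the new disk cannot be added. */
static int choose_slot(const struct mirror *m, int nr, bool *as_replacement)
{
	int repl_slot = -1;

	for (int i = 0; i < nr; i++) {
		if (!m[i].has_rdev) {
			/* A removed slot wins immediately. */
			*as_replacement = false;
			return i;
		}
		if (m[i].want_replacement && !m[i].has_replacement &&
		    repl_slot < 0)
			repl_slot = i;  /* remember, but keep scanning */
	}
	*as_replacement = (repl_slot >= 0);
	return repl_slot;
}

int main(void)
{
	/*
	 * The array from the commit message: slot 0 (sda) wants a
	 * replacement, slot 1 is removed, slots 2 and 3 are healthy.
	 */
	struct mirror mirrors[4] = {
		{ .has_rdev = true, .want_replacement = true },
		{ .has_rdev = false },
		{ .has_rdev = true },
		{ .has_rdev = true },
	};
	bool repl;
	int slot = choose_slot(mirrors, 4, &repl);

	printf("new disk goes to slot %d as %s\n",
	       slot, repl ? "replacement" : "member");
	/* prints: new disk goes to slot 1 as member */
	return 0;
}

With the old behavior the loop would have stopped at slot 0 and attached
the new disk as sda's replacement; with the prioritized scan it lands in
the removed slot 1, matching the scenario in the commit message.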