On 19.11.2012 11:27, Sebastian Riemer wrote:
> On 17.11.2012 01:07, Dan Williams wrote:
>> Hmm, how are you getting ->raid_disk set to -1 without the personality
>> being notified? That seems to be the source of the bug. Otherwise it
>> would seem the state_store("remove") path is also susceptible.
>>
>> --
>> Dan
>
> Thanks for your feedback!
>
> Perhaps it's only something for my use case of read-only
> raid1-replicated volumes where I never have any spares. I don't set
> ->raid_disk to -1 when hot adding them. I have to put them directly
> into their stored slot because I assemble read-only on read-only
> rdevs, so the syncer never runs. Those read-only rdevs come from
> remote storage. Imagine HA CD-ROM images.
>
> My thought was that it would be cleaner behavior if the cleanup of an
> rdev in the personality already happened when hot removing the disk.
>
> You're right, the state_store("remove") path would need this change,
> too.
>
> With the "slot_store" path I could trigger a panic when I hot added
> the disk as a spare, marked it in_sync and tried to give it the slot
> it belongs to. I'll test this with the latest vanilla kernel and try
> to reproduce this on rw volumes which I set to read-only. If I can
> also crash it that way, I've got proof that there is a bug.

The kernel blocks most ioctls on a read-only mddev, so I guess I've got
another custom patch. IMHO it would still be cleaner to also remove the
disk from the personality when hot removing an rdev.

> If the disk isn't cleaned up in the pers, then hot_add_disk doesn't
> return any error for raid1. Instead it crashes when referencing the
> old pointer when calling print_conf() in raid1.c.
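
To make the failure mode easier to see, here is a tiny userspace sketch
of the pointer lifetime involved. This is not the kernel code: the
types and helpers below only mimic struct md_rdev, conf->mirrors[] and
raid1.c's print_conf(). Drop the hot_remove_disk() call in main() and
print_conf() dereferences freed memory, which is the crash in miniature:

/* Userspace model of the md/raid1 interaction discussed above.
 * NOT kernel code: rdev, mirrors[] and print_conf() only mimic
 * struct md_rdev, conf->mirrors[] and raid1.c's print_conf(). */
#include <stdio.h>
#include <stdlib.h>

#define NR_DISKS 2

struct rdev {                   /* stand-in for struct md_rdev */
	char name[16];
	int raid_disk;          /* slot number, -1 = spare */
};

struct mirror {                 /* stand-in for struct raid1_info */
	struct rdev *rdev;      /* NULL when the slot is empty */
};

static struct mirror mirrors[NR_DISKS];  /* stand-in for conf->mirrors */

/* What the personality's ->hot_remove_disk() does for us here:
 * forget the pointer, so nothing dangles once the rdev is gone. */
static void hot_remove_disk(int slot)
{
	mirrors[slot].rdev = NULL;
}

/* Like raid1.c's print_conf(): walks the array and dereferences
 * every non-NULL mirror pointer. */
static void print_conf(void)
{
	int i;

	for (i = 0; i < NR_DISKS; i++)
		if (mirrors[i].rdev)
			printf(" disk %d: %s\n", i, mirrors[i].rdev->name);
}

int main(void)
{
	struct rdev *r = malloc(sizeof(*r));

	snprintf(r->name, sizeof(r->name), "sdb");
	r->raid_disk = 0;
	mirrors[0].rdev = r;

	/* Clean removal: tell the personality first, then drop the
	 * rdev. Without this call, print_conf() below reads freed
	 * memory, i.e. the stale-pointer crash described above. */
	hot_remove_disk(0);
	free(r);

	print_conf();           /* safe: slot 0 is NULL now */
	return 0;
}

The kernel case has the same shape: if the remove path never reaches
the personality, conf->mirrors[].rdev keeps pointing at the old
md_rdev, and the next hot add into that slot trips over it when
print_conf() walks the array.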