On Fri, May 11, 2018 at 10:46:59AM +0800, Xiao Ni wrote:
> Hi Shaohua
>
> On 05/11/2018 06:23 AM, Shaohua Li wrote:
> > From: Shaohua Li <shli@xxxxxx>
> >
> > The recent flush request handling seems unnecessarily complicated. The
> > main issue is that in rdev_end_flush we can get either the rdev of the
> > bio or the flush_info, but not both, unless we allocate extra memory
> > for the other. With the extra memory, we need to reallocate it on disk
> > hot add/remove. The original patch actually forgets the add_new_disk
> > case for that memory allocation, and we get a kernel crash.
>
> add_new_disk just adds the disk to md as a spare. After a reshape changes
> the number of raid disks, update_raid_disks reallocates the memory. Why
> is there a kernel crash? Could you explain more?

Not always a reshape. It's very easy to reproduce: just create a linear
array, grow it by one disk, and run some file IO that triggers a flush.

> > The idea is to always take a reference on every rdev in
> > md_flush_request and drop the references after the bio finishes. This
> > way rdev_end_flush doesn't need to know the rdev, so we don't need to
> > allocate extra memory.
>
> Is there a situation like this? A disk is plugged into the array after
> the flush bios have been submitted to the underlying disks. After those
> flush bios come back, there is one more rdev in the list mddev->disks.
> If we decrement all rdev references at once, we can decrement the
> reference of an rdev that never submitted a flush bio. That's the reason
> I tried to allocate memory for bios[0] in flush_info.

I think we wait for all IO to finish before hot adding/removing disks.

Thanks,
Shaohua
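
P.S. To make the refcounting idea concrete, here is a rough sketch of the
approach, not the actual patch. It is written against the md driver as it
looks around v4.17: rdev->nr_pending, mddev->flush_pending and
mddev->flush_work are real fields, and rdev_for_each_rcu, rdev_dec_pending
and bio_alloc_mddev are real helpers; the function bodies and the
md_flush_done name are illustrative only, and the sketch relies on hot
add/remove waiting for in-flight IO, as discussed above.

static void md_end_flush(struct bio *bio)
{
	struct mddev *mddev = bio->bi_private;

	bio_put(bio);
	/* Last flush bio back: drop the rdev references in process
	 * context. Assumes mddev->flush_work was initialized to
	 * md_flush_done below. */
	if (atomic_dec_and_test(&mddev->flush_pending))
		queue_work(md_wq, &mddev->flush_work);
}

static void submit_flushes(struct mddev *mddev)
{
	struct md_rdev *rdev;

	atomic_set(&mddev->flush_pending, 1);
	rcu_read_lock();
	rdev_for_each_rcu(rdev, mddev) {
		struct bio *bi;

		/* Pin every rdev, in use or not, so the unpin loop in
		 * md_flush_done matches this loop exactly and we never
		 * drop a reference we did not take. */
		atomic_inc(&rdev->nr_pending);
		if (rdev->raid_disk < 0 || test_bit(Faulty, &rdev->flags))
			continue;

		atomic_inc(&mddev->flush_pending);
		/* The pin also keeps rdev on the list, so it is safe to
		 * drop the RCU lock around the sleeping allocation and
		 * resume the walk afterwards, as mainline already does. */
		rcu_read_unlock();
		bi = bio_alloc_mddev(GFP_NOIO, 0, mddev);
		bi->bi_end_io = md_end_flush;
		bi->bi_private = mddev;	/* only the mddev, no per-bio rdev */
		bio_set_dev(bi, rdev->bdev);
		bi->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
		submit_bio(bi);
		rcu_read_lock();
	}
	rcu_read_unlock();
	if (atomic_dec_and_test(&mddev->flush_pending))
		queue_work(md_wq, &mddev->flush_work);
}

/* Illustrative name; runs as the mddev->flush_work handler. */
static void md_flush_done(struct work_struct *ws)
{
	struct mddev *mddev = container_of(ws, struct mddev, flush_work);
	struct md_rdev *rdev;

	/* All flush bios have completed. Every rdev was pinned above, so
	 * hot remove had to wait and this walk sees exactly the rdevs the
	 * submit loop pinned. */
	rcu_read_lock();
	rdev_for_each_rcu(rdev, mddev)
		rdev_dec_pending(rdev, mddev);
	rcu_read_unlock();
	/* ... then complete or resubmit the original flush bio, much as
	 * md_submit_flush_data does today ... */
}

The point is that md_end_flush only needs the mddev, so there is nothing
per-rdev to allocate, and nothing to reallocate on disk hot add/remove.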