Re: livelock during MD device open

On Tue, 14 Jan 2014 18:14:46 +0100 Nicolas Schichan <nschichan@xxxxxxxxxx>
wrote:

> 
> 
> Hi,
> 
> I have recently been trying to find the cause of a livelock occurring during
> MD device open.
> 
> The livelock happens when a process tries to open an MD device for the
> first time and another opens the same MD device and sends an invalid
> ioctl:
> 
> Process 1				Process 2
> ---------				---------
> 
> md_alloc()
>    mddev_find()
>    -> returns a new mddev with
>       hold_active == UNTIL_IOCTL
>    add_disk()
>    -> sends KOBJ_ADD uevent
> 
> 					(sees KOBJ_ADD uevent for device)
> 					md_open()
> 					md_ioctl(INVALID_IOCTL)
> 					-> returns ENODEV and clears
> 					   mddev->hold_active
> 					md_release()
> 					  md_put()
> 					  -> deletes the mddev as
> 					     hold_active is 0
> 
> md_open()
>    mddev_find()
>    -> returns a newly
>      allocated mddev with
>      mddev->gendisk == NULL
> -> returns with ERESTARTSYS
>     (kernel restarts the open syscall)
> 
> 
> As to how to fix this, I see two possibilities:
> 
> - don't set hold_active to 0 if err is -ENODEV in the abort_unlock
>    path in md_ioctl().
> 
> - check cmd parameter early in md_ioctl() and return -ENOTTY if the
>    cmd parameter is not a valid MD ioctl.
> 
> Please advise on the preferred way to fix this; I'll be glad to send a
> patch for whichever solution you prefer.
> 
> I also have a simple C program that I can send should you want to reproduce 
> the issue.
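> 
> A rough sketch of the idea (the device node and the bogus ioctl number are
> placeholders, and the real program synchronises the two opens more carefully):
> 
> 	/* One process opens an unused md device node, so md_alloc() and
> 	 * add_disk() run, while a second process opens the same node and
> 	 * issues an ioctl md does not recognise, clearing hold_active. */
> 	#include <fcntl.h>
> 	#include <sys/ioctl.h>
> 	#include <unistd.h>
> 
> 	int main(void)
> 	{
> 		int fd;
> 
> 		if (fork() == 0) {
> 			/* plays the role of the udev-triggered opener */
> 			fd = open("/dev/md42", O_RDONLY);
> 			if (fd >= 0) {
> 				ioctl(fd, 0x1234, 0);	/* not an MD ioctl */
> 				close(fd);
> 			}
> 			_exit(0);
> 		}
> 
> 		/* first opener: ends up restarting the open syscall forever */
> 		fd = open("/dev/md42", O_RDONLY);
> 		if (fd >= 0)
> 			close(fd);
> 		return 0;
> 	}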
> 
> Regards,
> 

That's a very small race you are consistently losing - if I understand
correctly.

In __blkdev_get:

 restart:

	ret = -ENXIO;
	disk = get_gendisk(bdev->bd_dev, &partno);
	if (!disk)
		goto out;
	owner = disk->fops->owner;

	disk_block_events(disk);
	mutex_lock_nested(&bdev->bd_mutex, for_part);


The "get_gendisk" calls into md_alloc (via md_probe) and then add_disk(),
which generates a uevent which is handled by udev.
And before the above code gets to the mutex_lock_nexted(), the process run by
udev must have opened the device (executing all that code above and more) and
issued the ioctl.
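
Process 1 then keeps tripping over the stale-mddev check in md_open() and
restarting the syscall; roughly (trimmed, details vary between versions):

	static int md_open(struct block_device *bdev, fmode_t mode)
	{
		struct mddev *mddev = mddev_find(bdev->bd_dev);

		if (mddev->gendisk != bdev->bd_disk) {
			/* we are racing with mddev_put which is
			 * discarding this bd_disk */
			mddev_put(mddev);
			/* wait until bdev->bd_disk is definitely gone,
			 * then retry the open from the top */
			flush_workqueue(md_misc_wq);
			return -ERESTARTSYS;
		}
		...
	}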

I guess it is possible, but for it to happen every time and produce a livelock
the scheduler must be encouraging that behaviour.  Presumably this is a
virtual machine with just one CPU?

I suppose the best fix is, as you suggest, to avoid clearing hold_active for
invalid ioctls.  It feels a bit like papering over a bug, but I think the
only way to really fix it is to add extra locking to the above code sequence
and I don't want to do that.

Of your two suggestions I much prefer the second.  It will be more code, but
it is also more obviously correct.  The current code is rather messy with
respect to invalid ioctl commands.

I would be happy to accept a patch which aborted md_ioctl if the cmd wasn't
one of those known to md.
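
For concreteness, something of this shape is what I would expect (the helper
name and the command list here are only illustrative, not a complete patch):

	static int md_ioctl_valid(unsigned int cmd)
	{
		switch (cmd) {
		case RAID_VERSION:
		case GET_ARRAY_INFO:
		case GET_DISK_INFO:
		case SET_ARRAY_INFO:
		case ADD_NEW_DISK:
		case HOT_ADD_DISK:
		case HOT_REMOVE_DISK:
		case RUN_ARRAY:
		case STOP_ARRAY:
		case STOP_ARRAY_RO:
		case SET_DISK_FAULTY:
			/* ... and the rest of the ioctls md knows about ... */
			return 0;
		default:
			return -ENOTTY;
		}
	}

with md_ioctl() calling it before doing anything else:

	err = md_ioctl_valid(cmd);
	if (err)
		return err;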

Thanks,
NeilBrown

