Re: corruption causing crash in __queue_work

Hello, Nikolay.

On Thu, Dec 17, 2015 at 12:46:10PM +0200, Nikolay Borisov wrote:
> diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
> index 493c38e08bd2..ccbbf7823cf3 100644
> --- a/drivers/md/dm-thin.c
> +++ b/drivers/md/dm-thin.c
> @@ -3506,8 +3506,8 @@ static void pool_postsuspend(struct dm_target *ti)
>         struct pool_c *pt = ti->private;
>         struct pool *pool = pt->pool;
> 
> -       cancel_delayed_work(&pool->waker);
> -       cancel_delayed_work(&pool->no_space_timeout);
> +       cancel_delayed_work_sync(&pool->waker);
> +       cancel_delayed_work_sync(&pool->no_space_timeout);
>         flush_workqueue(pool->wq);
>         (void) commit(pool);
>  }
> 
> And this seems to have resolved the crashes. For the past 24 hours I
> haven't seen a single server crash whereas before at least 3-5 servers
> would crash.

So, that's an obvious bug on the dm-thin side.

> Given that, it seems like a race condition between destroying the
> workqueue from dm-thin and cancelling all the delayed work.
> 
> Tejun, I've looked at cancel_delayed_work()/cancel_delayed_work_sync();
> they both call try_to_grab_pending() and then their behavior diverges.  Is
> it possible that there is a latent race condition between canceling the
> delayed work and the subsequent re-scheduling of the work item?

It's just the wrong variant being used.  cancel_delayed_work() doesn't
guarantee that the work item is no longer running when it returns.  If
the work item is still running while the workqueue is being destroyed,
it can end up requeueing itself on the already-destroyed workqueue.
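
Roughly, the pattern looks like the sketch below.  It's purely
illustrative -- the names (example_wq, example_waker, example_setup,
example_teardown) are made up here and not taken from dm-thin:

#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;
static struct delayed_work example_waker;

/* A delayed work item that re-arms itself, like a periodic waker. */
static void example_waker_fn(struct work_struct *ws)
{
        /* ... do the periodic work ... */

        /*
         * Re-arm.  If this callback is still running when teardown has
         * only done a plain cancel_delayed_work(), this call can land
         * on a workqueue that has already been destroyed.
         */
        queue_delayed_work(example_wq, &example_waker, HZ);
}

static int example_setup(void)
{
        example_wq = alloc_workqueue("example_wq", WQ_MEM_RECLAIM, 0);
        if (!example_wq)
                return -ENOMEM;

        INIT_DELAYED_WORK(&example_waker, example_waker_fn);
        queue_delayed_work(example_wq, &example_waker, HZ);
        return 0;
}

static void example_teardown(void)
{
        /*
         * cancel_delayed_work() only removes a pending item; a callback
         * that is already executing keeps running and may re-arm itself
         * after this returns.
         *
         * cancel_delayed_work_sync() also waits for a running callback
         * to finish and prevents it from requeueing, so by the time
         * destroy_workqueue() runs nothing can touch the workqueue.
         */
        cancel_delayed_work_sync(&example_waker);
        destroy_workqueue(example_wq);
}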

Thanks.

-- 
tejun

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel


