Hi Chris,

On Tue, Dec 23, 2014 at 01:55:26PM -0500, Chris Mason wrote:
> Works for me, the patch is mostly a (somewhat commented) list of all
> the places we're currently doing it wrong.

I think the following change should suffice to fix this issue, and it's
a lot easier to review, too.  I've given this a quick test, and it works
for me.  I do have one concern, though: is it safe to call mutex_lock()
when the current task is already on other wait queues?  If the answer is
no, it may be necessary to convert ->ring_lock back into the spinlock it
was prior to 3.10 so that mutex_lock() can be avoided entirely.  The
same question applies to kmap() and copy_to_user(), and those concerns
may have implications for the rest of the kernel.  Thoughts?

		-ben
--
"Thought is the essence of where you are now."

diff --git a/fs/aio.c b/fs/aio.c
index 1b7893e..a25bd96 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -1140,7 +1140,14 @@ static long aio_read_events_ring(struct kioctx *ctx,
 	long ret = 0;
 	int copy_ret;
 
-	mutex_lock(&ctx->ring_lock);
+	/* Avoid clobbering current->state during mutex_lock(). */
+	if (!mutex_trylock(&ctx->ring_lock)) {
+		long state = current->state;
+
+		__set_current_state(TASK_RUNNING);
+		mutex_lock(&ctx->ring_lock);
+		__set_current_state(state);
+	}
 
 	/* Access to ->ring_pages here is protected by ctx->ring_lock. */
 	ring = kmap_atomic(ctx->ring_pages[0]);
@@ -1179,6 +1186,7 @@ static long aio_read_events_ring(struct kioctx *ctx,
 		page = ctx->ring_pages[pos / AIO_EVENTS_PER_PAGE];
 		pos %= AIO_EVENTS_PER_PAGE;
 
+		__set_current_state(TASK_RUNNING);
 		ev = kmap(page);
 		copy_ret = copy_to_user(event + ret, ev + pos,
 					sizeof(*ev) * avail);
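
For anyone who wants the failure mode spelled out, the pattern being
broken looks roughly like the sketch below.  The names (my_wq, my_cond,
my_lock, wait_for_cond) are made up for illustration and are not from
fs/aio.c; the loop just mirrors what wait_event_interruptible() and
friends expand to.

	#include <linux/sched.h>
	#include <linux/wait.h>
	#include <linux/mutex.h>

	/* Hypothetical names, for illustration only. */
	static DECLARE_WAIT_QUEUE_HEAD(my_wq);
	static DEFINE_MUTEX(my_lock);
	static int my_cond;

	static void wait_for_cond(void)
	{
		DEFINE_WAIT(wait);

		for (;;) {
			/* Queues us on my_wq and sets current->state. */
			prepare_to_wait(&my_wq, &wait, TASK_INTERRUPTIBLE);
			if (my_cond)
				break;
			/*
			 * Anything that can block here (a contended
			 * mutex_lock(&my_lock), kmap(), copy_to_user())
			 * may sleep on its own and return with
			 * current->state reset to TASK_RUNNING, so the
			 * schedule() below won't actually sleep and the
			 * loop busy-spins instead of waiting for the
			 * wakeup.
			 */
			schedule();
		}
		finish_wait(&my_wq, &wait);
	}

In the aio case, the condition check inside
wait_event_interruptible_hrtimeout() in read_events() is what ends up
calling aio_read_events_ring() with current->state already set, which is
why the patch saves and restores the state around a contended
mutex_lock() and drops back to TASK_RUNNING before kmap() and
copy_to_user().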