On Wed, Jun 25, 2014 at 04:50:39PM -0400, Jeff Moyer wrote:
> From: Benjamin LaHaise <bcrl@xxxxxxxxx>
>
> The aio cleanups and optimizations by kmo that were merged into the 3.10
> tree added a regression for userspace event reaping. Specifically, the
> reference counts are not decremented if the event is reaped in userspace,
> leading to the application being unable to submit further aio requests.
> This issue was uncovered as part of CVE-2014-0206.
>
> [jmoyer@xxxxxxxxxx: backported to 3.10]

Thank you Jeff, I'll queue this backport for the 3.11 kernel as well.

Cheers,
--
Luís

> Signed-off-by: Benjamin LaHaise <bcrl@xxxxxxxxx>
> Signed-off-by: Jeff Moyer <jmoyer@xxxxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> Cc: Kent Overstreet <kmo@xxxxxxxxxxxxx>
> Cc: Mateusz Guzik <mguzik@xxxxxxxxxx>
> Cc: Petr Matousek <pmatouse@xxxxxxxxxx>
> ---
>  aio.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/fs/aio.c b/fs/aio.c
> index ebd06fd..8d2c997 100644
> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -310,7 +310,6 @@ static void free_ioctx(struct kioctx *ctx)
>
>  		avail = (head <= ctx->tail ? ctx->tail : ctx->nr_events) - head;
>
> -		atomic_sub(avail, &ctx->reqs_active);
>  		head += avail;
>  		head %= ctx->nr_events;
>  	}
> @@ -678,6 +677,7 @@ void aio_complete(struct kiocb *iocb, long res, long res2)
>  put_rq:
>  	/* everything turned out well, dispose of the aiocb. */
>  	aio_put_req(iocb);
> +	atomic_dec(&ctx->reqs_active);
>
>  	/*
>  	 * We have to order our ring_info tail store above and test
> @@ -755,8 +755,6 @@ static long aio_read_events_ring(struct kioctx *ctx,
>  	flush_dcache_page(ctx->ring_pages[0]);
>
>  	pr_debug("%li h%u t%u\n", ret, head, ctx->tail);
> -
> -	atomic_sub(ret, &ctx->reqs_active);
>  out:
>  	mutex_unlock(&ctx->ring_lock);
>
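For anyone reviewing the backport, the effect of the diff is that reqs_active
is now released in aio_complete() instead of only when the kernel reaps events
from the ring, so a process that reads completion events directly from the
ring mapping no longer leaks the in-flight count. The user-space model below
is only an illustration of that accounting, not the kernel code: MAX_REQS,
the function names and the stdatomic counter are assumptions made for the
example.

/*
 * Minimal sketch of the reqs_active accounting, assuming a fixed-size
 * context of MAX_REQS slots.  Purely illustrative; the real code is in
 * fs/aio.c.
 */
#include <stdatomic.h>
#include <stdio.h>

#define MAX_REQS 4

static atomic_int reqs_active = 0;

/* Submission charges a slot; fails when the context looks full. */
static int submit_request(void)
{
	if (atomic_load(&reqs_active) >= MAX_REQS)
		return -1;	/* the kernel would return -EAGAIN here */
	atomic_fetch_add(&reqs_active, 1);
	return 0;
}

/*
 * Post-fix behaviour: the slot is released at completion time (as in the
 * atomic_dec added to aio_complete), so it no longer matters whether the
 * event is later reaped by the kernel or read from the ring by user space.
 */
static void complete_request(void)
{
	atomic_fetch_sub(&reqs_active, 1);
}

int main(void)
{
	int i;

	/* Fill the context, then complete everything. */
	for (i = 0; i < MAX_REQS; i++)
		submit_request();
	for (i = 0; i < MAX_REQS; i++)
		complete_request();

	/*
	 * With the pre-fix accounting (decrement only when the kernel reaps
	 * events), user-space reaping would leave reqs_active stuck at
	 * MAX_REQS and this submission would keep failing, which is the
	 * regression described above.
	 */
	printf("resubmit after completion: %s\n",
	       submit_request() == 0 ? "ok" : "stuck (regression)");
	return 0;
}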