Re: [PATCH 17/20] aio: support for IO polling

On Tue, 2018-11-27 at 08:24 -0700, Jens Axboe wrote:
> On 11/27/18 2:53 AM, Benny Halevy wrote:
> > > @@ -818,11 +853,15 @@ static int kill_ioctx(struct mm_struct *mm, struct kioctx *ctx,
> > >  {
> > >  	struct kioctx_table *table;
> > >  
> > > +	mutex_lock(&ctx->getevents_lock);
> > >  	spin_lock(&mm->ioctx_lock);
> > >  	if (atomic_xchg(&ctx->dead, 1)) {
> > >  		spin_unlock(&mm->ioctx_lock);
> > > +		mutex_unlock(&ctx->getevents_lock);
> > >  		return -EINVAL;
> > >  	}
> > > +	aio_iopoll_reap_events(ctx);
> > > +	mutex_unlock(&ctx->getevents_lock);
> > 
> > Is it worth handling the mutex lock and calling aio_iopoll_reap_events
> > only if (ctx->flags & IOCTX_FLAG_IOPOLL)?  If so, testing it can be
> > removed from aio_iopoll_reap_events() (and maybe it could even
> > be open coded
> > here since this is its only call site apparently)
> 
> I don't think it really matters; this only happens when you tear down an
> io_context. FWIW, I think it's cleaner to retain the test in the
> function, not outside it.
> 
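
Just to spell out what that looks like: keeping the test inside the helper
amounts to an early return for non-polled contexts, something like the
sketch below (only the shape, not the actual body from the patch):

static void aio_iopoll_reap_events(struct kioctx *ctx)
{
        /* nothing to reap for contexts that never enabled polling */
        if (!(ctx->flags & IOCTX_FLAG_IOPOLL))
                return;

        /* ... reap any still-outstanding poll completions here ... */
}

and kill_ioctx() then stays as simple as in the hunk above.
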
> > > @@ -1072,6 +1112,15 @@ static inline void iocb_put(struct aio_kiocb *iocb)
> > >  	}
> > >  }
> > >  
> > > +static void iocb_put_many(struct kioctx *ctx, void **iocbs, int *nr)
> > > +{
> > > +	if (nr) {
> > 
> > How can nr be NULL?
> > And what's the point of supporting this case?
> > Did you mean: if (*nr)?
> > (In that case, if it's safe to call the functions below with *nr == 0,
> > I'm not sure it's worth optimizing... especially since this is a static
> > function and its callers make sure to call it only when *nr > 0.)
> 
> Indeed, that should be if (*nr), thanks! The slub implementation of the
> bulk free complains if you pass it a count of 0. Outside of that, a single
> check should be better than checking in multiple spots.
> 

Cool. When this function is inlined, the compiler might also be able to
optimize the check away if the caller already tests *nr for being non-zero.
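
FWIW, with the check fixed I'd expect the helper to end up looking roughly
like the below (the body past the *nr test is just my guess at the
percpu-ref drop plus slub bulk free you mention, not quoted from the patch):

static void iocb_put_many(struct kioctx *ctx, void **iocbs, int *nr)
{
        if (*nr) {
                /* drop one ctx->reqs reference per completed iocb */
                percpu_ref_put_many(&ctx->reqs, *nr);
                /* hand the whole batch back to slub in one call */
                kmem_cache_free_bulk(kiocb_cachep, *nr, iocbs);
                *nr = 0;
        }
}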

> > > @@ -1261,6 +1310,166 @@ static bool aio_read_events(struct kioctx *ctx, long min_nr, long nr,
> > >  	return ret < 0 || *i >= min_nr;
> > >  }
> > >  
> > > +#define AIO_IOPOLL_BATCH	8
> > > +
> > > +/*
> > > + * Process completed iocb iopoll entries, copying the result to userspace.
> > > + */
> > > +static long aio_iopoll_reap(struct kioctx *ctx, struct io_event __user *evs,
> > > +			    unsigned int *nr_events, long max)
> > > +{
> > > +	void *iocbs[AIO_IOPOLL_BATCH];
> > > +	struct aio_kiocb *iocb, *n;
> > > +	int to_free = 0, ret = 0;
> > > +
> > > +	list_for_each_entry_safe(iocb, n, &ctx->poll_completing, ki_list) {
> > > +		if (*nr_events == max)
> > 
> > *nr_events >= max would be safer.
> 
> I don't see how we can get there with it already being larger than max;
> that would be a big bug, since it would mean we filled in more events
> than userspace asked for.
> 

Currently we indeed can't, but if the code changes in the future and we
somehow do, '>=' will limit the damage, hence it's safer (and it costs
nothing in terms of performance).
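
Concretely, assuming the '== max' test just stops the reaping loop, the
defensive variant I have in mind is only this (a sketch, not the patch
itself):

        list_for_each_entry_safe(iocb, n, &ctx->poll_completing, ki_list) {
                /* never copy out more events than userspace asked for */
                if (*nr_events >= max)
                        break;
                /* ... copy the event, batch the iocb for freeing ... */
        }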

> > > @@ -1570,17 +1829,43 @@ static inline void aio_rw_done(struct kiocb *req, ssize_t ret)
> > >  	default:
> > >  		req->ki_complete(req, ret, 0);
> > >  	}
> > > +
> > 
> > nit: this hunk is probably unintentional
> 
> Looks like it, I'll kill it.
> 
> 



