Re: [RFC] [PATCH v2 1/1]: nfsd: Change caching from ino/dev to file-handle

Hi Jeff,

Thanks for your review comments.

> It would be nice to break up this patchset some. It's very hard to
> review large-scale changes like this, and this is quite frankly not
> code that is often touched by human hands. Being able to do a git
> bisect if we run across bugs after these changes would be a very nice
> thing. It won't be helpful though unless the changes are in smaller
> pieces.

OK, I will try to break this up and send it again.

> In your earlier discussion with Bruce, you mentioned trying to
> determine when to flush the cache. When the exports table is changed
> via exportfs, the exports kernel cache is also flushed. Hooking into
> that might be the best thing...

Thanks for this suggestion - I will look into how to do this.

> I'd also go ahead and get rid of the ra_ prefixes unless you feel
> they're needed. It'd be best to clean this up so that we don't have to
> muck around in here later.

OK.

> > +/* Number of jiffies to cache the file before releasing */
> > +#define NFSD_CACHE_JIFFIES      100
>
> It'd probably be better to express this in terms of HZ so that this is
> a fixed amount of time regardless of how the kernel is compiled.

Agreed - it should definitely have been expressed in terms of HZ.

> > +/* List of FH cache entries that has to be cleaned up when they expire */
> > +static struct list_head      nfsd_daemon_list;
> >
>
> ^^^ some more descriptive variable names would be welcome...

OK.

> ...I can't say I'm thrilled about adding a kthread for this but don't
> have any specific objections. I wonder if it might be better to just
> periodically schedule (and reschedule) delayed work to the events queue
> whenever a cache entry is touched?

I was originally using queue_delayed_work but found that performance was
better with a daemon (in that context, I had also submitted a patch to
lkml implementing a new API, queue_update_delayed_work - see:
http://lwn.net/Articles/300919/). I think the problem was heavy
contention on the kernel timer locks, especially when there are many
parallel reads at the same time (contention for the same timer "base"
lock shared with regular kernel timers used by other subsystems).

But I will run a test comparing delayed work against the daemon and
report what difference I find.
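For concreteness, the rescheduling pattern you describe might look roughly like this (a kernel-style sketch only, not compile-tested; the entry structure is hypothetical, and the cancel-plus-requeue on every touch is exactly where the timer-lock contention showed up in my earlier tests):

```c
#include <linux/workqueue.h>

struct fh_cache_entry {
	/* ... file handle, cached struct file, etc. ... */
	struct delayed_work expire_work;
};

/* Called whenever the entry is looked up: push its expiry forward. */
static void fh_cache_touch(struct fh_cache_entry *fhe)
{
	/* Cancelling and requeueing takes the timer base lock twice,
	 * which is costly under heavy parallel reads. */
	cancel_delayed_work(&fhe->expire_work);
	schedule_delayed_work(&fhe->expire_work, HZ);
}
```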

Bruce had also suggested profiling the new kernel against the old one.
Can someone tell me exactly what to run? (I have oprofile compiled in -
which user commands should I run, and what should I look for/report?)

I would like to resubmit after addressing Bruce's comments, your
suggestion about workqueue vs. daemon, and the other suggested changes.

Thanks,

- KK

