Re: [RPC] nfsd: NFSv4 close a file completely

> On Jun 15, 2022, at 11:28 AM, Wang Yugui <wangyugui@xxxxxxxxxxxx> wrote:
> 
> Hi,
> 
>>> On Jun 12, 2022, at 3:22 AM, Wang Yugui <wangyugui@xxxxxxxxxxxx> wrote:
>>> 
>>> NFSv4 needs to close a file completely (no lingering open) when it does
>>> a CLOSE or DELEGRETURN.
>>> 
>>> When there are multiple NFSv4 OPENs from different clients, we need to
>>> check the reference count. The following reference-count check changes
>>> the behavior of NFSv3 nfsd_rename()/nfsd_unlink() too.
>>> 
>>> Link: https://bugzilla.linux-nfs.org/show_bug.cgi?id=387
>>> Signed-off-by: Wang Yugui <wangyugui@xxxxxxxxxxxx>
>>> ---
>>> TO-CHECK:
>>> 1) Is the NFSv3 nfsd_rename()/nfsd_unlink() behavior change OK?
>>> 2) Can we get better performance than nfsd_file_close_inode_sync()?
>>> 3) Switching nfsd_file_close_inode_sync() to nfsd_file_close_inode() in nfsd4_delegreturn()
>>> 	=> 'Text file busy' for about 4s
>>> 4) reference-count check: should it be refcount_read(&nf->nf_ref) <= 1 or == 0?
>>> 	nfsd_file_alloc() does refcount_set(&nf->nf_ref, 1);
>>> 
>>> fs/nfsd/filecache.c | 2 +-
>>> fs/nfsd/nfs4state.c | 4 ++++
>>> 2 files changed, 5 insertions(+), 1 deletion(-)
>> 
>> I suppose I owe you (and Frank) a progress report on #386. I've fixed
>> the LRU algorithm and added some observability features to measure
>> how the fix impacts the cache's efficiency for NFSv3 workloads.
>> 
>> These new features show that the hit rate and average age of cache
>> items go down after the fix is applied. I'm trying to understand
>> whether I've done something wrong or whether the fix is supposed to do that.
>> 
>> To handle the case of hundreds of thousands of open files more
>> efficiently, I'd like to convert the filecache to use rhashtable.
> 
> A question about the coming rhashtable.
> 
> Currently, multiple nfsd exports share a single cache pool.
> 
> With the coming rhashtable, could an nfsd export use a private cache
> pool to improve scale-out?

That seems like a premature optimization. We don't know that the hashtable,
under normal (i.e., non-generic/531) workloads, is a scaling problem.

However, I am considering (in the future) creating separate filecaches
for each nfsd_net.


--
Chuck Lever
