Re: [PATCH RFC] vfs: Introduce a new open flag to imply dentry deletion on file removal

On Thu, Sep 12, 2024 at 12:53:40PM +0200, Jan Kara wrote:
> On Thu 12-09-24 17:15:48, Yafang Shao wrote:
> > This patch seeks to reintroduce the concept conditionally, where the
> > associated dentry is deleted only when the user explicitly opts for it
> > during file removal.
> > 
>
> Umm, I don't think we want to burn a FMODE flag and expose these details of
> dentry reclaim to userspace. BTW, if we wanted to do this, we already have
> d_mark_dontcache() for in-kernel users which we could presumably reuse.
> 

I don't believe any mechanism letting userspace hint at what to do with
a dentry is warranted at this point.

> But I would not completely give up on trying to handle this in an automated
> way inside the kernel. The original solution you mention above was perhaps
> too aggressive but maybe d_delete() could just mark the dentry with a
> "deleted" flag, retain_dentry() would move such dentries into a special LRU
> list which we'd prune once in a while (say once per 5 s). Also this list
> would be automatically pruned from prune_dcache_sb(). This way if there's
> quick reuse of a dentry, it will get reused and no harm is done, if it
> isn't quickly reused, we'll free them to not waste memory.
> 
> What do people think about such scheme?
>
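
For concreteness, the shape of what Jan describes might look roughly
like the below. This is only a sketch for discussion: the flag, list
and work item names are all invented, and real code would have to pin
the dentries and coordinate with dcache locking, refcounting and LRU
accounting, none of which is shown here.

#define DCACHE_WAS_DELETED 0x08000000   /* hypothetical flag bit */

static LIST_HEAD(deleted_lru);
static DEFINE_SPINLOCK(deleted_lru_lock);

static void deleted_lru_prune(struct work_struct *w);
static DECLARE_DELAYED_WORK(deleted_prune_work, deleted_lru_prune);

/* d_delete() would tag the dentry instead of unhashing it outright. */
static void mark_dentry_deleted(struct dentry *dentry)
{
        dentry->d_flags |= DCACHE_WAS_DELETED;
        spin_lock(&deleted_lru_lock);
        list_add_tail(&dentry->d_lru, &deleted_lru);
        spin_unlock(&deleted_lru_lock);
        schedule_delayed_work(&deleted_prune_work, 5 * HZ);
}

/*
 * Periodic prune; prune_dcache_sb() would drain the same list.  A
 * dentry reused in the meantime is assumed to have been taken back off
 * the list at lookup time, so whatever is still here saw no quick
 * reuse and can go.
 */
static void deleted_lru_prune(struct work_struct *w)
{
        LIST_HEAD(dispose);
        struct dentry *dentry, *tmp;

        spin_lock(&deleted_lru_lock);
        list_splice_init(&deleted_lru, &dispose);
        spin_unlock(&deleted_lru_lock);

        list_for_each_entry_safe(dentry, tmp, &dispose, d_lru) {
                list_del_init(&dentry->d_lru);
                d_drop(dentry);
        }
}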

I have to note that what to do with a dentry after unlink is merely a
subset of the general problem of what to do about negative entries.  I
had a look at it $elsewhere some years back and, as one might suspect,
userspace likes to do counterproductive shit. For example it will stat
a non-existent path 2-3 times and then open(..., O_CREAT) it.

I don't have numbers handy and someone(tm) will need to re-evaluate,
but the crux of the findings was as follows:
- there is a small subset of negative entries that keeps getting tons
  of hits
- a sizeable count literally never gets any hits after being created
  (aka wastes memory)
- some negative entries get 2-3 hits and then get converted into a
  positive entry (see the stat-then-create pattern above)
- some flip-flop between deletion and creation

So whatever the magic mechanism ends up being, if it wants to mostly
stay out of the way performance-wise, it will have to account for the
above.

I ended up with a kludge where negative entries hang out on some number
of LRU lists and get promoted to a hot list if they manage to get some
number of hits. The hot list is merely a FIFO and entries there no
longer count any hits. Removal from the cold LRU also demotes an entry
from the hot list.

The total count is limited, and if you want to create a new negative
dentry you have to whack one from the LRU first.
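
To make the scheme concrete, here is a toy userspace model (not kernel
code). The names, capacities and exact policies are invented for
illustration and are at best an approximation of the kludge described;
in particular, demoting the FIFO-oldest hot entry when everything is
hot is just one plausible reading of the above.

#include <stdio.h>
#include <string.h>

#define CACHE_CAP       4       /* max cached negative entries (made up) */
#define PROMOTE_HITS    2       /* cold hits needed to turn hot (made up) */

struct nentry {
        char name[32];
        int used;               /* slot occupied? */
        int hot;                /* promoted off the cold LRU? */
        int hits;               /* counted only while cold */
        unsigned long stamp;    /* LRU age when cold, FIFO order when hot */
};

static struct nentry cache[CACHE_CAP];
static unsigned long tick;

/*
 * Free one slot: evict the coldest cold entry; if everything is hot,
 * demote the FIFO-oldest hot entry back to cold and retry.
 */
static struct nentry *evict_one(void)
{
        for (;;) {
                struct nentry *victim = NULL;
                int i;

                for (i = 0; i < CACHE_CAP; i++) {
                        struct nentry *e = &cache[i];

                        if (e->used && !e->hot &&
                            (!victim || e->stamp < victim->stamp))
                                victim = e;
                }
                if (victim) {
                        printf("  evict %s\n", victim->name);
                        victim->used = 0;
                        return victim;
                }
                for (i = 0; i < CACHE_CAP; i++) {
                        struct nentry *e = &cache[i];

                        if (e->used && e->hot &&
                            (!victim || e->stamp < victim->stamp))
                                victim = e;
                }
                printf("  demote %s\n", victim->name);
                victim->hot = 0;
                victim->hits = 0;
                victim->stamp = tick++;
        }
}

/* Hit or create a negative entry for the given name. */
static void lookup(const char *name)
{
        struct nentry *e = NULL, *slot = NULL;
        int i;

        for (i = 0; i < CACHE_CAP; i++) {
                if (!cache[i].used)
                        slot = &cache[i];
                else if (strcmp(cache[i].name, name) == 0)
                        e = &cache[i];
        }

        if (e) {
                if (!e->hot) {
                        e->stamp = tick++;      /* cold hit: move to MRU */
                        if (++e->hits >= PROMOTE_HITS) {
                                e->hot = 1;     /* join the hot FIFO */
                                printf("  promote %s\n", name);
                        }
                }               /* hot hits deliberately not counted */
                return;
        }

        if (!slot)
                slot = evict_one();
        snprintf(slot->name, sizeof(slot->name), "%s", name);
        slot->used = 1;
        slot->hot = 0;
        slot->hits = 0;
        slot->stamp = tick++;
}

int main(void)
{
        /* "hotpath" gets repeated hits and should survive the churn */
        const char *trace[] = { "hotpath", "a", "hotpath", "hotpath",
                                "b", "c", "d", "e", "hotpath", "f" };
        unsigned long i;

        for (i = 0; i < sizeof(trace) / sizeof(trace[0]); i++) {
                printf("lookup %s\n", trace[i]);
                lookup(trace[i]);
        }
        return 0;
}

If you run it, "hotpath" gets promoted and survives while the one-shot
names rotate through the cold LRU, which is the separation the kludge
was after.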

The kludge is not perfect by any means but manages to successfully
separate the high-churn entries from the ones which are likely to stay
in the long run. Definitely something to tinker with.

If I read the original problem correctly, this would be sorted out as
a side effect by limiting how many entries there are to evict to begin
with.

I'm not signing up to do squat though. :)



