Re: fanotify_merge improvements

> > > Hum, now thinking about this, maybe we could clean this up even a bit more.
> > > event->inode is currently used only by inotify and fanotify for merging
> > > purposes. Now inotify could use its 'wd' instead of inode with exactly the
> > > same results, fanotify path or fid check is at least as strong as the inode
> > > check. So only for the case of pure "inode" events, we need to store inode
> > > identifier in struct fanotify_event - and we can do that in the union with
> > > struct path and completely remove the 'inode' member from fsnotify_event.
> > > Am I missing something?
> >
> > That generally sounds good and I did notice it is strange that wd is not
> > being compared.  However, I think I was worried that comparing fid+name
> > (in following patches) would be more expensive than comparing dentry (or
> > object inode) as a "rule out first" in merge, so I preferred to keep the
> > tag/dentry/id comparison for fanotify_fid case.
>
> Yes, that could be a concern.
>
> > Given this analysis (and assuming it is correct), would you like me to
> > just go ahead with the change suggested above? or anything beyond that?
>
> Let's go just with the change suggested above for now. We can work on this
> later (probably with optimizing of the fanotify merging code).
>

Hi Jan,

Recap:
- fanotify_merge is very inefficient and uses extensive CPU if the queue
  contains many events, so it is rather easy for a poorly written listener
  to cripple the system
- You had an idea to store in event->objectid a hash of all the compared
  fields (e.g. fid+name)
- I think you had an idea to keep a hash table of events in the queue to
  find the merge candidates faster
- For internal use, I carry a patch that limits the linear search to the
  last 128 events, which is enough to relieve the CPU overuse in case of
  unattended long queues
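For reference, the last point can be sketched in userspace roughly like this
(names and the constant are illustrative, not the actual fanotify code):

```c
/*
 * Userspace sketch of the "limit the linear search to the last 128 events"
 * patch mentioned above. Names and the constant are made up.
 */
#include <stddef.h>

#define MERGE_SEARCH_LIMIT 128	/* assumed cap on the scan */

struct event {
	unsigned int objectid;	/* stand-in for event->objectid */
	unsigned int mask;
	struct event *next;	/* list head is the most recently queued event */
};

/*
 * Walk at most MERGE_SEARCH_LIMIT events back from the tail of the queue
 * (most recent first) looking for a merge candidate; events further back
 * are simply never merged with.
 */
static struct event *find_merge_candidate(struct event *newest,
					  unsigned int objectid,
					  unsigned int mask)
{
	int scanned = 0;
	struct event *e;

	for (e = newest; e && scanned < MERGE_SEARCH_LIMIT;
	     e = e->next, scanned++) {
		if (e->objectid == objectid && e->mask == mask)
			return e;
	}
	return NULL;
}
```

This bounds the per-event merge cost regardless of how long an unattended
queue grows.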

I tried looking into implementing the hash table idea, assuming I understood
you correctly, but I struggled to choose appropriate table sizes. It seemed to
make sense to use a global hash table, such as the inode/dentry cache, shared
by all the groups, but that would add complexity to the locking rules of
queue/dequeue and group cleanup.
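To make sure we are talking about the same thing, this is roughly the
(per-group) shape I had in mind; all names and sizes below are made up:

```c
/*
 * Hypothetical per-group sketch of the hash table idea: bucket heads over
 * the queued events, keyed by the merge hash, so lookup of merge candidates
 * touches only events with an equal hash.
 */
#include <stddef.h>

#define MERGE_HASH_BITS	7
#define MERGE_HASH_SIZE	(1 << MERGE_HASH_BITS)

struct qevent {
	unsigned int hash;		/* hash of all the compared fields */
	struct qevent *hash_next;	/* chain within one bucket */
};

struct group_queue {
	struct qevent *buckets[MERGE_HASH_SIZE];
};

static void bucket_insert(struct group_queue *q, struct qevent *e)
{
	struct qevent **head = &q->buckets[e->hash & (MERGE_HASH_SIZE - 1)];

	e->hash_next = *head;
	*head = e;
}

/*
 * Candidates for merging a new event; a full field comparison is still
 * needed on each chained event. Unhashing events on dequeue (and on group
 * teardown) is the part that complicates locking if the table were shared
 * between groups.
 */
static struct qevent *merge_bucket(struct group_queue *q, unsigned int hash)
{
	return q->buckets[hash & (MERGE_HASH_SIZE - 1)];
}
```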

A simpler solution I considered, similar to my 128-events-limit patch, is to
limit the linear search to events queued in the last X seconds.
The rationale is that event merging is not supposed to be long term at all.
If a listener fails to read from the queue, it is not fsnotify's job to try
and keep the queue compact. I think the event merging mechanism was mainly
meant to merge short bursts of events on objects, which are quite common and
can surely happen concurrently on several objects.
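Because the queue is ordered by queue time, the age cutoff can terminate the
scan at the first too-old event. A rough sketch (time_t standing in for
jiffies, and the 2s limit being purely illustrative, not a proposed default):

```c
/*
 * Sketch of the age cutoff: each queued event records when it was queued
 * and the merge scan stops at the first event older than the limit. Since
 * the queue is ordered by queue time, everything behind it is even older.
 */
#include <time.h>
#include <stddef.h>

#define MERGE_MAX_AGE_SEC 2	/* illustrative, not a proposed default */

struct tevent {
	time_t queued_at;	/* would be jiffies in the kernel */
	unsigned int objectid;
	struct tevent *next;	/* newest first */
};

static struct tevent *find_recent_candidate(struct tevent *newest,
					    unsigned int objectid,
					    time_t now)
{
	struct tevent *e;

	for (e = newest; e; e = e->next) {
		if (now - e->queued_at > MERGE_MAX_AGE_SEC)
			break;	/* the rest of the queue is even older */
		if (e->objectid == objectid)
			return e;
	}
	return NULL;
}
```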

My intuition is that turning event->objectid into event->hash, in addition to
limiting the age of events to merge, would address the real-life workloads.
One question, if we do choose this approach, is what the age limit should be.
Should it be configurable? Default to infinity and let a distro cap the age,
or provide a sane default in the kernel while slightly changing behavior
(yes please)?
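For the event->hash part, what I mean is folding all the compared fields into
one value used as a cheap rule-out, something like the following (FNV-1a and
the struct are only stand-ins; the kernel would presumably use jhash or
full_name_hash over the real fid and name):

```c
/*
 * Sketch of folding all compared fields (e.g. fid + name) into a single
 * 32-bit event->hash, so the merge scan can rule out non-matches with one
 * integer compare. The hash function and struct layout are made up.
 */
#include <stdint.h>
#include <string.h>

static uint32_t fnv1a(const void *data, size_t len, uint32_t h)
{
	const unsigned char *p = data;
	size_t i;

	for (i = 0; i < len; i++)
		h = (h ^ p[i]) * 16777619u;
	return h;
}

struct hevent {
	uint32_t hash;		/* would replace event->objectid */
	unsigned char fid[16];	/* stand-in for the file handle */
	char name[64];
};

static uint32_t event_hash(const struct hevent *e)
{
	uint32_t h = fnv1a(e->fid, sizeof(e->fid), 2166136261u);

	return fnv1a(e->name, strlen(e->name), h);
}

/* Cheap rule-out first; full comparison only when the hashes match. */
static int events_match(const struct hevent *a, const struct hevent *b)
{
	if (a->hash != b->hash)
		return 0;
	return !memcmp(a->fid, b->fid, sizeof(a->fid)) &&
	       !strcmp(a->name, b->name);
}
```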

What are your thoughts about this?
Do you have a better idea maybe?

Thanks,
Amir.


