On Wed, Feb 17, 2021 at 3:48 PM Jan Kara <jack@xxxxxxx> wrote:
>
> On Wed 17-02-21 14:33:46, Amir Goldstein wrote:
> > On Tue, Feb 16, 2021 at 5:02 PM Jan Kara <jack@xxxxxxx> wrote:
> > > > @@ -300,10 +301,16 @@ static long inotify_ioctl(struct file *file, unsigned int cmd,
> > > >  	switch (cmd) {
> > > >  	case FIONREAD:
> > > >  		spin_lock(&group->notification_lock);
> > > > -		list_for_each_entry(fsn_event, &group->notification_list,
> > > > -				    list) {
> > > > -			send_len += sizeof(struct inotify_event);
> > > > -			send_len += round_event_name_len(fsn_event);
> > > > +		list = fsnotify_first_notification_list(group);
> > > > +		/*
> > > > +		 * With multi queue, send_len will be a lower bound
> > > > +		 * on total events size.
> > > > +		 */
> > > > +		if (list) {
> > > > +			list_for_each_entry(fsn_event, list, list) {
> > > > +				send_len += sizeof(struct inotify_event);
> > > > +				send_len += round_event_name_len(fsn_event);
> > > > +			}
> > >
> > > As I write below IMO we should enable hashed queues also for inotify (is
> > > there good reason not to?)
> >
> > I see your perception of inotify_merge() is the same as mine was
> > when I wrote a patch to support hashed queues for inotify.
> > It is only after that I realized that inotify_merge() only ever merges
> > with the last event and I dropped that patch.
> > I see no reason to change this long time behavior.
>
> Ah, I even briefly looked at that code but didn't notice it merges only
> with the last event. I agree that hashing for inotify doesn't make sense
> then.
>
> Hum, if the hashing and merging is specific to fanotify and as we decided
> to keep the event->list for the global event list, we could easily have the
> hash table just in fanotify private group data and hash->next pointer in
> fanotify private part of the event? Maybe that would even result in a more
> compact code?
>

Maybe, I am not so sure. I will look into it.

> > > > +static inline size_t fsnotify_group_size(unsigned int q_hash_bits)
> > > > +{
> > > > +	return sizeof(struct fsnotify_group) + (sizeof(struct list_head) << q_hash_bits);
> > > > +}
> > > > +
> > > > +static inline unsigned int fsnotify_event_bucket(struct fsnotify_group *group,
> > > > +						 struct fsnotify_event *event)
> > > > +{
> > > > +	/* High bits are better for hash */
> > > > +	return (event->key >> (32 - group->q_hash_bits)) & group->max_bucket;
> > > > +}
> > >
> > > Why not use hash_32() here? IMHO better than just stripping bits...
> > See hash_ptr(). There is a reason to use the highest bits.
>
> Well, but event->key is just a 32-bit number so I don't follow how high
> bits used by hash_ptr() matter?

Of course, you are right.
But that 32-bit number was already generated using a xor of several
hash_32() results from hash_ptr() and full_name_hash(), so we do not
really need to mix it anymore to get better entropy in the higher 7 bits.
I do not mind using hash_32() here for clarity.

Thanks,
Amir.
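
For illustration, a minimal sketch of the hash_32() variant discussed
above (untested, and assuming only the q_hash_bits field and the 32-bit
event->key from the quoted patch):

#include <linux/hash.h>	/* hash_32() */

static inline unsigned int fsnotify_event_bucket(struct fsnotify_group *group,
						 struct fsnotify_event *event)
{
	/*
	 * event->key is already a xor of hash_32()/full_name_hash() results,
	 * so this is mostly about readability: hash_32() folds all 32 bits
	 * of the key down to q_hash_bits bits.
	 */
	return hash_32(event->key, group->q_hash_bits);
}

Since hash_32(val, bits) returns a value below 1 << bits, the explicit
"& group->max_bucket" mask from the patch would become unnecessary.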
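
And for the suggestion of keeping the hash table in fanotify's private
group data, a rough sketch of one possible layout (names are made up
purely to illustrate the idea, not taken from the patch): the event
would stay on the global notification list via event->list, while
merging walks a per-bucket chain kept entirely inside fanotify.

/* Hypothetical names; hlist types come from <linux/types.h>. */
#define FANOTIFY_HTABLE_BITS	7
#define FANOTIFY_HTABLE_SIZE	(1 << FANOTIFY_HTABLE_BITS)

struct fanotify_group_data {
	/* merge hash buckets, private to fanotify */
	struct hlist_head merge_hash[FANOTIFY_HTABLE_SIZE];
};

struct fanotify_event {
	struct fsnotify_event fse;	/* ->list stays on the global queue */
	struct hlist_node merge_list;	/* chains events within one bucket */
	u32 hash;			/* 32-bit merge key for this event */
};

With something like this, struct fsnotify_group would keep a fixed size,
which might also make the fsnotify_group_size() helper quoted above
unnecessary.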