On Mon, Jun 29, 2020 at 05:05:38PM +0300, Amir Goldstein wrote:
> > > The motivation for the patch "fs: Do not check if there is a fsnotify
> > > watcher on pseudo inodes"
> > > was performance, but actually, FS_CLOSE and FS_OPEN events probably do
> > > not impact performance as FS_MODIFY and FS_ACCESS.
> >
> > Correct.
> >
> > > Do you agree that dropping FS_MODIFY/FS_ACCESS events for FMODE_STREAM
> > > files as a general rule should be safe?
> >
> > Hum, so your patch drops FS_MODIFY/FS_ACCESS events also for named pipes
> > compared to the original patch AFAIU and for those fsnotify works fine
> > so far. So I'm not sure we won't regress someone else with this.
> >
> > I've also tested inotify on a sample pipe like: cat /dev/stdin | tee
> > and watched /proc/<tee pid>/fd/0 and it actually generated IN_MODIFY |
> > IN_ACCESS when data arrived to a pipe and tee(1) read it and then
> > IN_CLOSE_WRITE | IN_CLOSE_NOWRITE when the pipe got closed (I thought you
> > mentioned modify and access events didn't get properly generated?).
>
> I don't think that I did (did I?)
> I didn't see them properly generated for fanotify_mark but that could have been a failure. inotify-watch is able to generate the events.
> >
> > So as much as I agree that some fsnotify events on FMODE_STREAM files are
> > dubious, they could get used (possibly accidentally) and so after this
> > Chromium experience I think we just have to revert the change and live with
> > generating notification events for pipes to avoid userspace regressions.
> >
> > Thoughts?
>
> I am fine with that.
>
> Before I thought of trying out FMODE_STREAM I was considering to propose
> to set the new flag FMODE_NOIONOTIFY in alloc_file_pseudo() to narrow Mel's
> patch to dropping FS_MODIFY|FS_ACCESS.
>
> But I guess the burden of proof is back on Mel.
> And besides, quoting Mel's patch:
> "A patch is pending that reduces, but does not eliminate, the overhead of
> fsnotify but for files that cannot be looked up via a path, even that
> small overhead is unnecessary"
>
> So really, we are not even sacrificing much by reverting this patch.
> We down to "nano optimizations".
>

It's too marginal to be worth the risk. A plain revert is safest when
multiple people are hitting this.

-- 
Mel Gorman
SUSE Labs
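
For reference, a minimal sketch of the kind of inotify test described in the
quoted text above, assuming a pipeline such as "cat /dev/stdin | tee" is
running in another terminal and the watched path is tee's /proc/<pid>/fd/0.
The program and its usage are illustrative and not taken from the thread:

/*
 * Illustrative sketch (not from the original thread): watch a pipe via its
 * /proc/<pid>/fd/N path and print the inotify events it generates.
 *
 * Usage: ./pipewatch /proc/<tee pid>/fd/0
 */
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char buf[4096];
	ssize_t len;
	int fd, wd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s /proc/<pid>/fd/0\n", argv[0]);
		return 1;
	}

	fd = inotify_init1(0);
	if (fd < 0) {
		perror("inotify_init1");
		return 1;
	}

	/* The /proc fd symlink is followed, so the watch lands on the pipe inode. */
	wd = inotify_add_watch(fd, argv[1], IN_MODIFY | IN_ACCESS |
			       IN_CLOSE_WRITE | IN_CLOSE_NOWRITE);
	if (wd < 0) {
		perror("inotify_add_watch");
		return 1;
	}

	/* Each read() returns one or more packed struct inotify_event records. */
	while ((len = read(fd, buf, sizeof(buf))) > 0) {
		char *p = buf;

		while (p < buf + len) {
			struct inotify_event *ev = (struct inotify_event *)p;

			printf("event mask 0x%x\n", ev->mask);
			p += sizeof(*ev) + ev->len;
		}
	}

	return 0;
}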