Re: git regression failures with v6.2-rc NFS client

On 4 Feb 2023, at 6:07, Thorsten Leemhuis wrote:

> But as you said: people are more likely to run into this problem now.
> This in the end makes the kernel worse and thus afaics is a regression,
> as Hugh mentioned.
>
> There sadly is no quote from Linus in
> https://docs.kernel.org/process/handling-regressions.html
> that exactly matches and helps in this scenario, but a few that come
> close; one of them:
>
> ```
> Because the only thing that matters IS THE USER.
>
> How hard is that to understand?
>
> Anybody who uses "but it was buggy" as an argument is entirely missing
> the point. As far as the USER was concerned, it wasn't buggy - it
> worked for him/her.
> ```
>
> Anyway, I guess we get close to the point where I simply explicitly
> mention the issue in my weekly regression report, then Linus can speak
> up himself if he wants. No hard feeling here, I think that's just my duty.
>
> BTW, I CCed the regression list, as it should be in the loop for
> regressions per
> https://docs.kernel.org/admin-guide/reporting-regressions.html
>
> BTW, Benjamin, you earlier in this thread mentioned:
>
> ```
> Thorsten's bot is just scraping your regression report email, I doubt
> they've carefully read this thread.
> ```
>
> Well, kinda. It's just not the bot that adds the regression to the
> tracking, that's me doing it. But yes, I only skim threads, and when
> adding something I sometimes simply lack the knowledge or details to
> decide whether it really is a regression or not. But often that becomes
> clear sooner or later -- and then I'll remove an issue from the
> tracking, if it turns out it isn't a regression.
>
> Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)

Ah, thanks for explaining that.

I'd like to summarize and quantify this problem one last time for folks who
don't want to read everything.  Suppose an application wants to remove all
files and then the parent directory, and uses this pattern to do it:

opendir
while (getdents)
    unlink dents
closedir
rmdir

Before this commit, that would work with up to 126 dentries on an NFS mount
of a tmpfs export.  If the directory had 127 or more, the rmdir would fail
with ENOTEMPTY.

After this commit, it only works with up to 17 dentries.

The argument that this is making things worse takes the position that there
are more directories in the universe with >17 dentries that want to be
cleaned up by this "saw off the branch you're sitting on" pattern than
directories with >127.  And I guess that's true if Chuck runs that testing
setup enough.  :)

We can change the optimization in the commit from
NFS_READDIR_CACHE_MISS_THRESHOLD + 1
to
nfs_readdir_array_maxentries + 1

This would make the regression disappear, and would also keep most of the
optimization.

Ben



