Re: git regression failures with v6.2-rc NFS client

> On Feb 3, 2023, at 19:15, Hugh Dickins <hughd@xxxxxxxxxx> wrote:
> 
> On Sat, 4 Feb 2023, Trond Myklebust wrote:
>>> On Feb 3, 2023, at 18:53, Hugh Dickins <hughd@xxxxxxxxxx> wrote:
>>> On Fri, 3 Feb 2023, Chuck Lever III wrote:
>>>>> On Feb 3, 2023, at 5:26 PM, Trond Myklebust <trondmy@xxxxxxxxxxxxxxx> wrote:
>>>>> The bottom line is that you’ve always been playing the lottery when mounting tmpfs over NFS.
>>>> 
>>>> I'm not debating the truth of that. I just don't think we should
>>>> be making that situation needlessly worse.
>>>> 
>>>> And I would be much more comfortable with this if it appeared in
>>>> a man page or on our wiki, or ... I'm sorry, but "some email in
>>>> 2001" is not documentation a user should be expected to find.
>>> 
>>> I very much agree with you, Chuck.  Making something imperfect
>>> significantly worse is called "a regression".
>>> 
>>> And I would expect the (laudable) optimization which introduced
>>> that regression to be reverted from 6.2 for now, unless (I imagine
>>> not, but have no clue) it can be easily conditionalized somehow on
>>> not-tmpfs or not-simple_dir_operations.  But that's not my call.
>>> 
>>> What is the likelihood that simple_dir_operations will be enhanced,
>>> or a satisfactory complicated_dir_operations added?  I can assure
>>> you, never by me!  If Al or Amir or some dcache-savvy FS folk have
>>> time on their hands and an urge to add what's wanted, great: but
>>> that surely will not come in 6.2, if ever.
>>> 
>>> More likely that effort would have to come from the NFS(D) end,
>>> who will see the benefit.  And if there's some little tweak to be
>>> made to simple_dir_operations, which will give you the hint you need
>>> to handle it better, I expect fsdevel would welcome a patch or two.
>>> 
>>> Hugh
>> 
>> 
>> No! If it were impossible to hit this problem before the patch, then I might agree with you. However, what it does is expose a problem that has always existed, but was a lot less likely to happen timing-wise when we were allowing glibc to suck in all 50000 or so directory entries in one gulp.
>> 
>> IOW: this patch doesn’t cause the problem; it just makes it easier to hit when you are using a high-performance setup like Chuck's. It was always easy to hit when you were using slower networking and/or smaller rsize values against a remote server with multiple clients creating and deleting files in the same NFS-exported tmpfs directory.
>> _________________________________
>> Trond Myklebust
>> Linux NFS client maintainer, Hammerspace
>> trond.myklebust@xxxxxxxxxxxxxxx
> 
> I can only repeat,
> making something imperfect significantly worse is called "a regression".
> 
> Hugh

<resending due to mailing list rejection>

It is exposing a problem which was always there. You can’t just pick and choose your tests and claim that one result is more significant than the other: that’s called bias.

The functional change here is simply to cut the very first readdir short: i.e. it returns fewer entries than previously. Reducing the readdir buffer size supplied by the userspace application would have the exact same effect. The fact that Chuck’s test happened to work before the patch went in is due to a systematic bias in that test. Change that bias (by changing the buffer size supplied by glibc in the sys_getdents call) and you should see the same problem that this patch exposed.
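
[Editor's illustration, not part of the original thread] A minimal userspace sketch of the point above: calling getdents64 directly with a deliberately tiny buffer forces the directory to be read in many small chunks instead of one large gulp, which is the same effect the patch has on the first readdir. Run against an NFS-mounted tmpfs directory while another client creates and deletes entries, and the same exposure should be reproducible without the patch. The 256-byte buffer size and the command-line handling here are arbitrary choices for the demonstration.

/* build: cc -o smallreaddir smallreaddir.c */
#define _GNU_SOURCE
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : ".";
	int fd = open(path, O_RDONLY | O_DIRECTORY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	char buf[256];	/* tiny buffer: forces many small getdents64 calls */
	long total = 0;

	for (;;) {
		long n = syscall(SYS_getdents64, fd, buf, sizeof(buf));
		if (n < 0) {
			perror("getdents64");
			break;
		}
		if (n == 0)	/* end of directory */
			break;

		/* walk the variable-length dirent records in this chunk */
		for (long off = 0; off < n; ) {
			struct dirent64 *d = (struct dirent64 *)(buf + off);
			if (strcmp(d->d_name, ".") && strcmp(d->d_name, ".."))
				total++;
			off += d->d_reclen;
		}
	}

	printf("saw %ld entries\n", total);
	close(fd);
	return 0;
}

Comparing the entry count against what a concurrent writer actually left in the directory shows how much the result depends on the per-call buffer size, i.e. on the "systematic bias" described above.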


_________________________________
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@xxxxxxxxxxxxxxx




