On 3/17/2021 2:10 AM, Steve French wrote:
> Was examining why xfstest 614 failed (smb3.1.1 mount to Windows), and
> noticed a couple of problems:
>
> 1) we don't requery the allocation size (number of blocks) in all cases
> we should. This small fix should address that:
>
> --- a/fs/cifs/inode.c
> +++ b/fs/cifs/inode.c
> @@ -2390,15 +2390,16 @@ int cifs_getattr(struct user_namespace *mnt_userns, const struct path *path,
>  	struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
>  	struct inode *inode = d_inode(dentry);
>  	int rc;
>
>  	/*
>  	 * We need to be sure that all dirty pages are written and the server
>  	 * has actual ctime, mtime and file length.
>  	 */
> -	if ((request_mask & (STATX_CTIME | STATX_MTIME | STATX_SIZE)) &&
> +	if ((request_mask & (STATX_CTIME | STATX_MTIME | STATX_SIZE | STATX_BLOCKS)) &&
Seems obviously enough correct.
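For context, the block count reaches cifs_getattr() through the statx() request mask. A minimal userspace sketch (assuming the glibc >= 2.28 statx() wrapper; "foo1" is just an example path) of what "stat -c %b" effectively asks the kernel for:

/* Sketch: query only the block count via statx(2).
 * Needs glibc >= 2.28 for the statx() wrapper.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>     /* AT_FDCWD */
#include <sys/stat.h>  /* statx(), STATX_BLOCKS */

int main(int argc, char **argv)
{
	struct statx stx;
	const char *path = argc > 1 ? argv[1] : "foo1";  /* example path */

	/* Only STATX_BLOCKS is requested; with the patch above,
	 * cifs_getattr() would treat this like a size/time query and
	 * revalidate the cached attributes.
	 */
	if (statx(AT_FDCWD, path, 0, STATX_BLOCKS, &stx) < 0) {
		perror("statx");
		return 1;
	}
	printf("blocks: %llu\n", (unsigned long long)stx.stx_blocks);
	return 0;
}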
> and 2) we don't set the allocation size in the case where a cached file
> on this client is written, and to make it worse if we queried
Also, obviously the cache needs to be kept in sync, but is it accurate to set the allocation size before writing? That's the server's field, so shouldn't it be written, then queried?
> (post query attributes flag) at SMB3 close for compounded operations -
> the Windows server (not sure about others) apparently doesn't update
> the allocation size until the next open/queryinfo, so we still end up
> with an allocation size of 0 for a 64K file, which breaks the test.
>
> What the test is doing is quite simple:
>
>   xfs_io -f -c "truncate 64K" -c "mmap -w 0 64K" -c "mwrite -S 0xab 0 64K" -c "munmap" foo1 ; stat -c %b foo1
>
> And it fails due to seeing a block count of 0 rather than the correct
> value (128). With actimeo=0 we do a subsequent open/query operation
> which would cause it to pass, since the second open/query does show the
> correct allocation size.
>
> Any ideas?
What actually goes on the wire during the test? It looks like the munmap step should be msync'ing - does cifs.ko not write the data?

Tom.
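For reference, the xfs_io invocation above boils down to roughly the following syscall sequence (a sketch, not the literal xfs_io implementation; the msync() is left commented out because a plain "munmap" in xfs_io does not appear to flush first, which is exactly the behavior in question):

/* Approximate syscall sequence behind the xfs_io test above. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
	const char *path = "foo1";
	const size_t len = 64 * 1024;
	struct stat st;

	int fd = open(path, O_RDWR | O_CREAT, 0644);       /* xfs_io -f */
	if (fd < 0 || ftruncate(fd, len) < 0)               /* "truncate 64K" */
		return 1;

	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,   /* "mmap -w 0 64K" */
		       MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	memset(p, 0xab, len);                               /* "mwrite -S 0xab 0 64K" */

	/* Uncommenting the msync(MS_SYNC) below would force the dirty
	 * pages to the server before the unmap; the test as written
	 * relies on munmap/close/writeback alone.
	 */
	/* msync(p, len, MS_SYNC); */
	munmap(p, len);                                     /* "munmap" */
	close(fd);

	if (stat(path, &st) == 0)                           /* stat -c %b foo1 */
		printf("blocks: %lld\n", (long long)st.st_blocks);
	return 0;
}

If adding the msync(MS_SYNC) before munmap makes the block count come back correct, that would point at writeback timing rather than the close-time attribute caching.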