This is a note to let you know that I've just added the patch titled

    smb3: update allocation size more accurately on write completion

to the 6.6-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:

    smb3-update-allocation-size-more-accurately-on-write.patch

and it can be found in the queue-6.6 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


commit fc5d8dd520420aa09d24b69ad279a23ffe0a34c0
Author: Steve French <stfrench@xxxxxxxxxxxxx>
Date:   Thu Feb 22 00:26:52 2024 -0600

    smb3: update allocation size more accurately on write completion

    [ Upstream commit dbfdff402d89854126658376cbcb08363194d3cd ]

    Changes to allocation size are approximated for extending writes of
    cached files until the server returns the actual value (on SMB3 close
    or query info, for example), but it was setting the estimated number
    of blocks to larger than the file size even if the file is likely
    sparse, which breaks various xfstests (e.g. generic/129, 130, 221, 228).

    When i_size and i_blocks are updated in write completion, do not
    increase the allocation size by more than what was written (rounded
    up to 512 bytes).

    Signed-off-by: Steve French <stfrench@xxxxxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index 6d44991e1ccdc..751ae89cefe36 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -3204,8 +3204,15 @@ static int cifs_write_end(struct file *file, struct address_space *mapping,
 	if (rc > 0) {
 		spin_lock(&inode->i_lock);
 		if (pos > inode->i_size) {
+			loff_t additional_blocks = (512 - 1 + copied) >> 9;
+
 			i_size_write(inode, pos);
-			inode->i_blocks = (512 - 1 + pos) >> 9;
+			/*
+			 * Estimate new allocation size based on the amount written.
+			 * This will be updated from server on close (and on queryinfo)
+			 */
+			inode->i_blocks = min_t(blkcnt_t, (512 - 1 + pos) >> 9,
+						inode->i_blocks + additional_blocks);
 		}
 		spin_unlock(&inode->i_lock);
 	}
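
For reference, here is a small standalone sketch (not part of the patch; the
function and variable names such as estimate_blocks are made up for the
example) of the arithmetic the hunk above relies on: the bytes actually
written are rounded up to 512-byte blocks, and the cached i_blocks estimate
is capped at what the new file size would imply, so an extending write into
a sparse region no longer inflates the allocation size.

/*
 * Standalone illustration only -- not kernel code.
 */
#include <stdio.h>
#include <stdint.h>

static uint64_t estimate_blocks(uint64_t cur_blocks, uint64_t pos,
				uint64_t copied)
{
	/* round the bytes actually written up to 512-byte blocks */
	uint64_t additional_blocks = (512 - 1 + copied) >> 9;
	/* blocks implied by the new file size (the old, too-large estimate) */
	uint64_t size_blocks = (512 - 1 + pos) >> 9;
	uint64_t est = cur_blocks + additional_blocks;

	/* same effect as the min_t() used in the patch */
	return est < size_blocks ? est : size_blocks;
}

int main(void)
{
	/*
	 * 4 KiB written at a 1 MiB offset into a previously empty sparse file:
	 * the estimate grows by 8 blocks, not the ~2056 implied by the new i_size.
	 */
	printf("%llu\n",
	       (unsigned long long)estimate_blocks(0, 1048576 + 4096, 4096));
	return 0;
}

Compiled and run on its own, the sketch prints 8, matching the intent of the
fix: the block count tracks what was written rather than the full file size.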