Hi Greg,

On Sun, Jul 11, 2021 at 02:21:24PM +0200, gregkh@xxxxxxxxxxxxxxxxxxx wrote:
> 
> The patch below does not apply to the 5.4-stable tree.
> If someone wants it applied there, or to any other stable or longterm
> tree, then please email the backport, including the original git commit
> id to <stable@xxxxxxxxxxxxxxx>.

Here is the backport; it will also apply to all branches down to 4.14-stable.

--
Regards
Sudip
>From 6a9596aacb9cc11149423390ac429f9ab5b30075 Mon Sep 17 00:00:00 2001
From: David Sterba <dsterba@xxxxxxxx>
Date: Mon, 14 Jun 2021 12:45:18 +0200
Subject: [PATCH] btrfs: compression: don't try to compress if we don't have
 enough pages

commit f2165627319ffd33a6217275e5690b1ab5c45763 upstream

The early check if we should attempt compression does not take into
account the number of input pages. It can happen that there's only one
page, eg. a tail page after some ranges of the BTRFS_MAX_UNCOMPRESSED
have been processed, or an isolated page that won't be converted to an
inline extent.

The single page would be compressed but a later check would drop it
again because the result size must be at least one block shorter than
the input. That can never work with just one page.

CC: stable@xxxxxxxxxxxxxxx # 4.4+
Signed-off-by: David Sterba <dsterba@xxxxxxxx>
[sudip: adjust context]
Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@xxxxxxxxx>
---
 fs/btrfs/inode.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 64dd702a5448..025b02e9799f 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -543,7 +543,7 @@ static noinline int compress_file_range(struct async_chunk *async_chunk)
 	 * inode has not been flagged as nocompress. This flag can
 	 * change at any time if we discover bad compression ratios.
 	 */
-	if (inode_need_compress(inode, start, end)) {
+	if (nr_pages > 1 && inode_need_compress(inode, start, end)) {
 		WARN_ON(pages);
 		pages = kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS);
 		if (!pages) {
-- 
2.30.2
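
Not part of the patch, just for reference: below is a minimal user-space
sketch of the size rule the commit message describes (the compressed
result must be at least one block shorter than the input). The 4 KiB page
and block sizes are illustrative assumptions, not values taken from the
kernel code, but they show why a single page can never pass that later
check, which is the wasted work the new nr_pages > 1 test avoids.

/*
 * Stand-alone sketch, not kernel code: keep a compressed copy only if it
 * is at least one block shorter than the input, as the commit message
 * describes.  Page and block sizes are assumed to be 4 KiB purely for
 * illustration.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE_SKETCH  4096UL
#define BLOCK_SIZE_SKETCH 4096UL

static bool worth_compressing(unsigned long nr_pages,
			      unsigned long compressed_bytes)
{
	unsigned long input_bytes = nr_pages * PAGE_SIZE_SKETCH;

	/* Compression must save at least one full block. */
	return compressed_bytes + BLOCK_SIZE_SKETCH <= input_bytes;
}

int main(void)
{
	/*
	 * With one page the input is a single block, so even a 1-byte
	 * result cannot be a whole block smaller: the compression work
	 * would always be thrown away.
	 */
	printf("1 page, 1 byte out:      %d\n", worth_compressing(1, 1));
	printf("2 pages, 4096 bytes out: %d\n", worth_compressing(2, 4096));
	return 0;
}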