This is a note to let you know that I've just added the patch titled

    f2fs: compress: fix deadloop in f2fs_write_cache_pages()

to the 6.1-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     f2fs-compress-fix-deadloop-in-f2fs_write_cache_pages.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


commit b675e87e610d478ca420732104f178e594996306
Author: Chao Yu <chao@xxxxxxxxxx>
Date:   Mon Aug 28 22:04:14 2023 +0800

    f2fs: compress: fix deadloop in f2fs_write_cache_pages()

    [ Upstream commit c5d3f9b7649abb20aa5ab3ebff9421a171eaeb22 ]

    With the mount option and testcase below, the kernel hangs:

    1. mount -t f2fs -o compress_log_size=5 /dev/vdb /mnt/f2fs
    2. touch /mnt/f2fs/file
    3. chattr +c /mnt/f2fs/file
    4. dd if=/dev/zero of=/mnt/f2fs/file bs=1MB count=1
    5. sync
    6. dd if=/dev/zero of=/mnt/f2fs/file bs=111 count=11 conv=notrunc
    7. sync

    INFO: task sync:4788 blocked for more than 120 seconds.
          Not tainted 6.5.0-rc1+ #322
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    task:sync            state:D stack:0     pid:4788  ppid:509    flags:0x00000002
    Call Trace:
     <TASK>
     __schedule+0x335/0xf80
     schedule+0x6f/0xf0
     wb_wait_for_completion+0x5e/0x90
     sync_inodes_sb+0xd8/0x2a0
     sync_inodes_one_sb+0x1d/0x30
     iterate_supers+0x99/0xf0
     ksys_sync+0x46/0xb0
     __do_sys_sync+0x12/0x20
     do_syscall_64+0x3f/0x90
     entry_SYSCALL_64_after_hwframe+0x6e/0xd8

    The reason is that f2fs_all_cluster_page_ready() assumes the pages
    array covers at least one full cluster; otherwise it always returns
    false, resulting in a deadloop. By default, the pages array size
    (F2FS_ONSTACK_PAGES) is 16, which covers any cluster_size less than
    or equal to 16; with compress_log_size=5 above, a cluster spans
    2^5 = 32 pages, so the array can never hold a full cluster. For the
    case where cluster_size is larger than 16, allocate the memory for
    the pages array dynamically.
Fixes: 4c8ff7095bef ("f2fs: support data compression")
Signed-off-by: Chao Yu <chao@xxxxxxxxxx>
Signed-off-by: Jaegeuk Kim <jaegeuk@xxxxxxxxxx>
Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index f4d3b3c6f6da7..47483634b06a3 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2950,7 +2950,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 {
 	int ret = 0;
 	int done = 0, retry = 0;
-	struct page *pages[F2FS_ONSTACK_PAGES];
+	struct page *pages_local[F2FS_ONSTACK_PAGES];
+	struct page **pages = pages_local;
 	struct folio_batch fbatch;
 	struct f2fs_sb_info *sbi = F2FS_M_SB(mapping);
 	struct bio *bio = NULL;
@@ -2974,6 +2975,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 #endif
 	int nr_folios, p, idx;
 	int nr_pages;
+	unsigned int max_pages = F2FS_ONSTACK_PAGES;
 	pgoff_t index;
 	pgoff_t end;		/* Inclusive */
 	pgoff_t done_index;
@@ -2983,6 +2985,15 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 	int submitted = 0;
 	int i;
 
+#ifdef CONFIG_F2FS_FS_COMPRESSION
+	if (f2fs_compressed_file(inode) &&
+		1 << cc.log_cluster_size > F2FS_ONSTACK_PAGES) {
+		pages = f2fs_kzalloc(sbi, sizeof(struct page *) <<
+				cc.log_cluster_size, GFP_NOFS | __GFP_NOFAIL);
+		max_pages = 1 << cc.log_cluster_size;
+	}
+#endif
+
 	folio_batch_init(&fbatch);
 
 	if (get_dirty_pages(mapping->host) <=
@@ -3028,7 +3039,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 add_more:
 		pages[nr_pages] = folio_page(folio, idx);
 		folio_get(folio);
-		if (++nr_pages == F2FS_ONSTACK_PAGES) {
+		if (++nr_pages == max_pages) {
 			index = folio->index + idx + 1;
 			folio_batch_release(&fbatch);
 			goto write;
@@ -3214,6 +3225,11 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 
 	if (bio)
 		f2fs_submit_merged_ipu_write(sbi, &bio, NULL);
 
+#ifdef CONFIG_F2FS_FS_COMPRESSION
+	if (pages != pages_local)
+		kfree(pages);
+#endif
+
 	return ret;
 }
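
As a side note for anyone reviewing the backport: the core of the change
above is a common C pattern, falling back from a small on-stack buffer to
a heap allocation when the required capacity (here, one full compression
cluster) exceeds the stack array, and freeing only when the fallback path
was actually taken. Below is a minimal userspace sketch of that pattern;
STACK_CAPACITY, entry_t, and process() are names invented for this sketch,
not kernel APIs.

/*
 * Minimal userspace sketch of the buffer-fallback pattern used by the
 * fix above. All names here are made up for illustration; the kernel
 * code uses F2FS_ONSTACK_PAGES, struct page *, and f2fs_kzalloc()
 * with __GFP_NOFAIL instead.
 */
#include <stdio.h>
#include <stdlib.h>

#define STACK_CAPACITY 16	/* stands in for F2FS_ONSTACK_PAGES */

typedef void *entry_t;		/* stands in for struct page * */

static void process(unsigned int log_cluster_size)
{
	entry_t entries_local[STACK_CAPACITY];
	entry_t *entries = entries_local;
	unsigned int max_entries = STACK_CAPACITY;

	/* The batch must hold at least one full cluster to make progress. */
	if ((1u << log_cluster_size) > STACK_CAPACITY) {
		max_entries = 1u << log_cluster_size;
		entries = calloc(max_entries, sizeof(*entries));
		if (!entries)
			abort();	/* kernel side avoids this via __GFP_NOFAIL */
	}

	printf("cluster of %u pages -> batch capacity %u\n",
	       1u << log_cluster_size, max_entries);

	/* ... fill and write back up to max_entries entries per batch ... */

	/* Free only if we actually left the on-stack buffer. */
	if (entries != entries_local)
		free(entries);
}

int main(void)
{
	process(4);	/* 16-page cluster: the stack buffer suffices */
	process(5);	/* 32-page cluster: heap allocation required */
	return 0;
}

Comparing the pointer against the on-stack array, as the fix does with
pages != pages_local, keeps the common small-cluster path allocation-free
while leaving a single cleanup site at function exit.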