On Thu, Jul 13, 2023 at 03:04:22PM +0200, Christoph Hellwig wrote:
> Hi all,
>
> this series has various fixes for bugs found by inspection or only triggered
> with upcoming changes that are a fallout from my work on bound lifetimes
> for the ordered extent and better conforming to expectations from the
> common writeback code.
>
> Note that this series builds on the "btrfs compressed writeback cleanups"
> series sent out previously.
>
> A git tree is also available here:
>
>     git://git.infradead.org/users/hch/misc.git btrfs-writeback-fixes
>
> Gitweb:
>
>     http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/btrfs-writeback-fixes
>
> Diffstat:
>  extent_io.c |  182 ++++++++++++++++++++++++++++++++++++------------------------
>  inode.c     |   16 +----
>  2 files changed, 117 insertions(+), 81 deletions(-)

Just FYI, I've been using these two series to see how the GitHub CI stuff
was working, and I keep tripping over a hang in generic/475.  It appears to
be in the fixup worker; here's the sysrq-w output:

sysrq: Show Blocked State
task:kworker/u4:5    state:D stack:0     pid:1713600 ppid:2      flags:0x00004000
Workqueue: btrfs-fixup btrfs_work_helper
Call Trace:
 <TASK>
 __schedule+0x533/0x1910
 ? find_held_lock+0x2b/0x80
 schedule+0x5e/0xd0
 __reserve_bytes+0x4e2/0x830
 ? __pfx_autoremove_wake_function+0x10/0x10
 btrfs_reserve_data_bytes+0x54/0x170
 btrfs_check_data_free_space+0x6a/0xf0
 btrfs_delalloc_reserve_space+0x2b/0xe0
 btrfs_writepage_fixup_worker+0x7e/0x4c0
 btrfs_work_helper+0xff/0x410
 process_one_work+0x26b/0x550
 worker_thread+0x53/0x3a0
 ? __pfx_worker_thread+0x10/0x10
 kthread+0xf5/0x130
 ? __pfx_kthread+0x10/0x10
 ret_from_fork+0x2c/0x50
 </TASK>

task:kworker/u4:4    state:D stack:0     pid:2513631 ppid:2      flags:0x00004000
Workqueue: events_unbound btrfs_async_reclaim_data_space
Call Trace:
 <TASK>
 __schedule+0x533/0x1910
 ? lock_acquire+0xca/0x2b0
 schedule+0x5e/0xd0
 schedule_timeout+0x1ad/0x1c0
 __wait_for_common+0xbd/0x220
 ? __pfx_schedule_timeout+0x10/0x10
 btrfs_wait_ordered_extents+0x3e3/0x480
 btrfs_wait_ordered_roots+0x184/0x260
 flush_space+0x3de/0x6a0
 ? btrfs_async_reclaim_data_space+0x52/0x180
 ? lock_release+0xc9/0x270
 btrfs_async_reclaim_data_space+0xff/0x180
 process_one_work+0x26b/0x550
 worker_thread+0x1eb/0x3a0
 ? __pfx_worker_thread+0x10/0x10
 kthread+0xf5/0x130
 ? __pfx_kthread+0x10/0x10
 ret_from_fork+0x2c/0x50
 </TASK>

task:kworker/u4:6    state:D stack:0     pid:2513783 ppid:2      flags:0x00004000
Workqueue: btrfs-flush_delalloc btrfs_work_helper
Call Trace:
 <TASK>
 __schedule+0x533/0x1910
 schedule+0x5e/0xd0
 btrfs_start_ordered_extent+0x153/0x210
 ? __pfx_autoremove_wake_function+0x10/0x10
 btrfs_run_ordered_extent_work+0x19/0x30
 btrfs_work_helper+0xff/0x410
 process_one_work+0x26b/0x550
 worker_thread+0x53/0x3a0
 ? __pfx_worker_thread+0x10/0x10
 kthread+0xf5/0x130
 ? __pfx_kthread+0x10/0x10
 ret_from_fork+0x2c/0x50
 </TASK>

We appear to be getting hung up because the ENOSPC code is flushing and
waiting on ordered extents, while the fixup worker is waiting to reserve
space.  My hunch is that the page in the fixup worker is attached to an
ordered extent.

I can pretty reliably reproduce this in the CI, so if you have trouble
reproducing it let me know.  I'll dig into it later today, but I may not
get to it before you do.

Thanks,

Josef