Hi all,

The test case is a simple sequential write to XFS backed by a thin volume. The test VM is running the latest 6.3.0-rc7, has 8 CPUs and 8GB RAM, and the thin volume is backed by sufficient space in the thin pool. I.e.:

lvcreate --type thin-pool -n tpool -L30G test
lvcreate -V 20G -n tvol test/tpool
mkfs.xfs /dev/test/tvol
mount /dev/test/tvol /mnt
dd if=/dev/zero of=/mnt/file bs=1M

The dd command writes until ~1GB or so of free space is left in the fs and then seems to hit a livelock. From a quick look at tracepoints, XFS appears to be spinning in the xfs_convert_blocks() writeback path. df shows space consumption no longer changing, the flush worker is spinning at 100%, and dd is blocked in balance_dirty_pages(). If I kill dd, the writeback worker continues spinning and an fsync of the file blocks indefinitely. If I reset the vm, remount and run the following:

dd if=/dev/zero of=/mnt/file bs=1M conv=notrunc oflag=append

... it then runs to -ENOSPC, as expected.

I haven't seen this occur when running on a non-thin lvm volume; I'm not sure why. What is also interesting is that if I rm the file and repeat the run on the thin volume (so the thin volume is pretty much fully mapped at this point), the problem still occurs.

This doesn't reproduce on v6.2. Given the number of XFS changes and the behavior above, it sort of smells more like an XFS issue than a dm one, but I've no real evidence of that. Regardless, I ran a bisect over the related XFS commits and it implicated one of the two following commits:

85843327094f ("xfs: factor xfs_bmap_btalloc()")
74c36a8689d3 ("xfs: use xfs_alloc_vextent_this_ag() where appropriate")

More specifically, 85843327094f is the first commit that conclusively exhibits the problem. 74c36a8689d3 is inconclusive because I run into an almost instant shutdown when running the test. If I take one more step back to commit 4811c933ea1a ("xfs: combine __xfs_alloc_vextent_this_ag and xfs_alloc_ag_vextent"), the problem doesn't occur.

Brian
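
P.S. For convenience, here is the reproducer above collected into a single script. This is just a sketch of the same steps; it assumes a pre-existing VG named "test" with at least 30G free and an unused /mnt mount point.

#!/bin/sh
# Reproducer sketch: assumes a VG named "test" with >= 30G free and an
# unused /mnt mount point.
lvcreate --type thin-pool -n tpool -L30G test
lvcreate -V 20G -n tvol test/tpool
mkfs.xfs /dev/test/tvol
mount /dev/test/tvol /mnt
# On 6.3.0-rc7 this stalls with ~1GB of free space left instead of
# completing with -ENOSPC; the flush worker spins and dd blocks in
# balance_dirty_pages().
dd if=/dev/zero of=/mnt/file bs=1M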