This is a weird one ... which is good, because it means the obvious ones
have been fixed and now I'm just tripping over the weird cases. And
fortunately, xfstests exercises the weird cases.

1. The file is 0x3d000 bytes long.
2. A readahead allocates an order-2 THP for 0x3c000-0x3ffff.
3. We simulate a read error for 0x3c000-0x3cfff.
4. Userspace writes to 0x3d697-0x3dfaa.
5. iomap_write_begin() gets the 0x3c page, sees it's a THP and
   !Uptodate, so it calls iomap_split_page() (passing page 0x3d).
6. iomap_split_page() calls split_huge_page().
7. split_huge_page() sees that page 0x3d is beyond EOF (i_size is still
   0x3d000; it isn't updated until after the copy), so it removes the
   page from i_pages.
8. iomap_write_actor() copies the data into page 0x3d.
9. The write is lost: page 0x3d is no longer in the page cache, so
   nothing will ever write it back.

Trying to persuade XFS to update i_size before calling
iomap_file_buffered_write() seems like a bad idea. Changing
split_huge_page() to disregard i_size is something I kind of want to be
able to do long-term in order to make hole-punch more efficient, but
that seems like a lot of work right now.

I think the easiest way to fix this is to decline to allocate readahead
pages beyond EOF. That is, if we have a file which is, say, 61 pages
long, read the last 5 pages into an order-2 THP and an order-0 page
instead of allocating an order-3 THP and zeroing the last three pages.
It's probably the right thing to do anyway -- we split THPs that overlap
the EOF on a truncate. (A quick sketch of the arithmetic is at the end
of this mail.)

I'll start implementing this in the morning, but I thought I'd share
the problem & proposed solution in case anybody has a better idea.
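
For anyone who wants to see the order-selection arithmetic concretely,
here's a minimal userspace sketch of the proposed readahead sizing:
never allocate a THP that extends past EOF, and instead break the tail
into the largest naturally-aligned pieces that fit. max_order() and the
loop in main() are illustrative names I made up for this mail, not
actual kernel interfaces.

#include <stdio.h>

/*
 * Pick the largest order such that an allocation at 'index' is
 * naturally aligned (index is a multiple of 1 << order) and fits
 * entirely within the 'remaining' pages before EOF.
 */
static int max_order(unsigned long index, unsigned long remaining)
{
	int order = 0;

	/*
	 * Grow the order while the next size up is both naturally
	 * aligned at 'index' and small enough to stay below EOF.
	 */
	while ((index & ((2UL << order) - 1)) == 0 &&
	       (2UL << order) <= remaining)
		order++;
	return order;
}

int main(void)
{
	unsigned long index = 0x38;	/* first of the last 5 pages */
	unsigned long eof = 0x3d;	/* file is 61 pages long */

	while (index < eof) {
		int order = max_order(index, eof - index);

		/* Prints order-2 at 0x38, then order-0 at 0x3c. */
		printf("allocate order-%d at index 0x%lx\n", order, index);
		index += 1UL << order;
	}
	return 0;
}

For the 61-page example above this emits an order-2 THP at 0x38 and an
order-0 page at 0x3c. Without the EOF clamp, an order-3 THP at 0x38
would cover indices 0x38-0x3f, and pages 0x3d-0x3f would be exactly the
beyond-EOF filler that split_huge_page() later throws away.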