Hi,

I was looking into doing delayed allocation for ext3 in page_mkwrite() and I've noticed that the way ext4 estimates the number of blocks it needs to reserve during delayed allocation is probably wrong when an inode uses indirect blocks rather than extents. The problem is with files with holes - for example, if we write blocks 1024, 2048, 3072, and 4096, we have to allocate an indirect block together with each data block we write. Therefore we need at least 8 blocks reserved, but ext4_indirect_calc_metadata_amount() will reserve only 6 of them.

We can only improve the estimate over the worst case (allocating all indirect blocks) if we know that some block before the one we are estimating for is already allocated / has space reserved. But that is rather non-trivial to detect. Maybe what we could do is look into the page cache, find the first page before the current one, check that it has a mapped or delayed buffer, and subtract the indirect blocks for the buffers in that page from those we need to reserve. But the locking is going to be nasty :( (we need the page lock to safely inspect the buffers in that page). Maybe reserving for the worst case is better after all.

What do you think?

									Honza
-- 
Jan Kara <jack@xxxxxxx>
SUSE Labs, CR
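
PS: To make the arithmetic concrete, here is a minimal userspace sketch (not the actual ext4 code; the 12 direct slots and 4-byte block pointers are the usual assumptions, and 4k blocksize is assumed in the example) of the worst-case number of new metadata blocks needed to map one isolated data block in an indirect-block file:

/*
 * Worst case: the whole ind/dind/tind chain for this block is missing,
 * so we would have to allocate 'depth' metadata blocks for it.
 */
#include <stdio.h>

#define NDIR_BLOCKS	12UL			/* direct slots in the inode */

static unsigned long worst_case_meta(unsigned long lblock,
				     unsigned long blocksize)
{
	unsigned long apb = blocksize / 4;	/* block numbers per indirect block */

	if (lblock < NDIR_BLOCKS)
		return 0;			/* direct block, no metadata needed */
	lblock -= NDIR_BLOCKS;
	if (lblock < apb)
		return 1;			/* one indirect block */
	lblock -= apb;
	if (lblock < apb * apb)
		return 2;			/* dindirect + indirect */
	return 3;				/* tindirect + dindirect + indirect */
}

int main(void)
{
	unsigned long blocks[] = { 1024, 2048, 3072, 4096 };
	unsigned long i, total = 0;

	for (i = 0; i < 4; i++) {
		/* assuming 4k blocksize */
		unsigned long m = worst_case_meta(blocks[i], 4096);

		printf("block %lu: up to %lu metadata block(s)\n", blocks[i], m);
		total += m + 1;			/* plus the data block itself */
	}
	/*
	 * Summing the per-block worst case overcounts the dindirect /
	 * tindirect blocks that the writes share - exactly the sharing
	 * that is hard to detect without looking at neighbouring pages.
	 */
	printf("naive worst-case total: %lu blocks\n", total);
	return 0;
}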