On Wed, Oct 13, 2010 at 01:14:08AM +0200, Jan Kara wrote:
> c) When we decide some reservation scheme is unavoidable, there is the
>    question of how to estimate the amount of indirect blocks. My scheme
>    is one possibility, but there is a wider variety of tradeoffs between
>    complexity and accuracy. A special low-effort, low-impact possibility
>    here might be to just ignore the ENOSPC problem as we did so far,
>    reserve quota only for the data block on page fault, and rely on the
>    fact that there isn't going to be that much metadata, so the user
>    cannot exceed his quota limit by too much... But when we already have
>    the interface change, it seems a bit stupid not to fix it properly
>    and also handle ENOSPC with it.

We ultimately decided to do two different things for ENOSPC versus
EDQUOT in ext4. For quota overflow we just assume that the number of
metadata blocks won't be that large, and we allow them to go over quota.
For ENOSPC, we force writeback to see if it frees up space, and
ultimately we drop out of delayed allocation mode when we are close to
running out of space (and for non-root users we depend on the 5% of
blocks reserved for root).

Yeah, that means that if a root application mmap's a huge 100GB sparse
region, and we only have 2GB free in the file system, and the
application then proceeds to write to all 100GB of the mmap'ed region,
there's a chance data might get silently lost when we drop out of
delalloc mode and then really do run out of space.

But really, what are we supposed to do? Unless you have the kernel
break out in hysterical laughter and reject the mmap at allocation
time, I suppose the only other thing we could do, if silently dropping
data is unacceptable, is to send the SEGV early even though we might
have a few blocks left. That way the data loss isn't silent (the
application will probably drop core and die instead), so it's no longer
our problem. :-)

						- Ted
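P.S. To make the silent-loss window concrete, here's a small userspace
demonstration (nothing ext4-specific; the file name, the 1GB size, and
the page-sized stride are all arbitrary). The stores through the
mapping have no error path back to the application at all; the msync()
at the end is the application's only chance to hear about a writeback
failure:

/* sparse_mmap_demo.c -- why mmap writes can fail silently under
 * delayed allocation.  Build with: cc -o demo sparse_mmap_demo.c */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
	const off_t len = (off_t)1 << 30;	/* 1GB, all sparse */
	int fd = open("sparse.img", O_CREAT | O_RDWR | O_TRUNC, 0644);

	if (fd < 0 || ftruncate(fd, len) < 0) {
		perror("setup");
		return 1;
	}

	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
		       fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Each store just dirties a page in memory; no disk block is
	 * allocated yet, so a full filesystem cannot be reported here. */
	for (off_t off = 0; off < len; off += 4096)
		p[off] = 'x';

	/* Skip this call and any allocation failure at writeback time
	 * is completely invisible to us. */
	if (msync(p, len, MS_SYNC) < 0)
		perror("msync");	/* e.g. EIO once blocks ran out */

	munmap(p, len);
	close(fd);
	return 0;
}

Run this on a nearly-full filesystem with the msync() deleted and the
program can exit 0 while the tail of the data quietly disappears.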
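And the "send the signal early" idea would live in the fault path,
roughly like the sketch below (written against the old two-argument
->page_mkwrite() signature). The fs_* helpers and the margin constant
are made up for illustration, not real ext4 internals; the one real
detail is that refusing a write fault with VM_FAULT_SIGBUS delivers
SIGBUS, not SIGSEGV, to the task:

/*
 * Sketch only: fail the write fault while a few blocks still remain,
 * so the task dies loudly instead of losing data at writeback time.
 * fs_free_blocks() and fs_reserve_block_for_page() are stand-ins for
 * whatever the filesystem uses to track free and reserved blocks.
 */
#include <linux/fs.h>
#include <linux/mm.h>

#define FS_ENOSPC_MARGIN	64	/* blocks held back; tunable */

static int sketch_page_mkwrite(struct vm_area_struct *vma,
			       struct vm_fault *vmf)
{
	struct super_block *sb = vma->vm_file->f_mapping->host->i_sb;

	/* Down to the last few blocks: refuse the fault now and the
	 * task gets SIGBUS immediately, instead of having its dirty
	 * page dropped later when the allocation really fails. */
	if (fs_free_blocks(sb) < FS_ENOSPC_MARGIN)
		return VM_FAULT_SIGBUS;

	/* Otherwise reserve a block for this page up front, so the
	 * later writeback is guaranteed to succeed. */
	if (fs_reserve_block_for_page(vma->vm_file, vmf->page))
		return VM_FAULT_SIGBUS;

	return 0;
}

The margin is what keeps the loss from being silent: pages that were
already dirtied while space remained can still be written back out of
the held-back blocks.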