jim owens wrote:
> This is why it was done. In practice this was only used by kernel
> callers because most application developers simply looped with a
> fixed buffer and adjusted fm_start. Dumber applications kept
> doubling their malloc to get all extents at once... or core dump :)

I'm seeing that a database, VM, or filesystem-in-a-file app (anything
like that using huge files) may want to optimise its I/O scheduling and
allocation pattern according to the estimated layout of its open
files... and some of those would like to be robust and not core dump
when presented with really large files! :)

And not core dumping does not mean "abandon the strategy when too
large" either - it means have a strategy which scales :-)

I'm also seeing block devices potentially offering a similar or even
the same interface, now that block devices (LVM) have extents and
underlying allocation strategies much the same as modern filesystems.
Perhaps LVM itself could support FIEMAP. From a database engine's
point of view, there isn't really much difference any more except
differing APIs and limitations.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel"
in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html