Valerie Clement wrote:
As Alex asked, I included in the test results the file fragmentation
level and the number of I/Os done during the file deletion.
Here are the results obtained with a lightly fragmented 100-GB file:
                      |  ext3       ext4 + extents   xfs
 ------------------------------------------------------------
 number of fragments  |  796        798               15
 elapsed time         |  2m0.306s   0m11.127s         0m0.553s
                      |
 blocks read          |  206600     6416              352
 blocks written       |  13592      13064             104
 ------------------------------------------------------------
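The fragment counts above were presumably taken with filefrag from
e2fsprogs or something similar; purely as an illustration, a minimal
fragment counter based on the FIBMAP ioctl might look like the sketch
below (my own code, not the tool used for these numbers; it assumes a
file without holes, assumes st_blksize matches the filesystem block
size, and needs root because FIBMAP requires it):

/*
 * Hypothetical fragment counter using the FIBMAP ioctl -- a sketch only.
 * Walks the file's logical blocks and counts runs of contiguous
 * physical blocks.
 */
#include <fcntl.h>
#include <linux/fs.h>      /* FIBMAP */
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	struct stat st;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(argv[1]);
		return 1;
	}

	long nblocks = (st.st_size + st.st_blksize - 1) / st.st_blksize;
	long fragments = 0;
	int prev = -2;

	for (long i = 0; i < nblocks; i++) {
		int blk = (int)i;        /* logical block in, physical block out */
		if (ioctl(fd, FIBMAP, &blk) < 0) {
			perror("FIBMAP");
			return 1;
		}
		if (blk != prev + 1)     /* discontiguity starts a new fragment */
			fragments++;
		prev = blk;
	}
	printf("%s: %ld fragments\n", argv[1], fragments);
	close(fd);
	return 0;
}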
Hmm, if I did the math right, then in theory a 100-GB file could be
placed using ~850 extents: 100 * 1024 / 120, where 120 MB is roughly
the amount of data that can be allocated in a regular block group.
850 extents would require 3 leaf blocks (340 extents per block) plus
1 index block. To free the file we'd need to read those 4 blocks, all
~850 involved block bitmaps, and some group descriptor blocks. So we
probably need to tune balloc; then we'd improve the remove time by a
factor of six (~6400 blocks read now vs. ~900-1000 blocks read)?
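For what it's worth, here is a minimal sketch of that arithmetic (my
own back-of-the-envelope code, not anything from the patches), assuming
4 KB blocks, roughly 120 MB of allocatable data per 128 MB block group,
one extent per group, and 12-byte extent entries after a 12-byte header
in each 4 KB extent-tree block:

/* Back-of-the-envelope estimate of blocks read when freeing the file. */
#include <stdio.h>

int main(void)
{
	const long file_mb        = 100 * 1024;        /* 100 GB file                */
	const long mb_per_group   = 120;               /* ~usable data per group     */
	const long exts_per_block = (4096 - 12) / 12;  /* 340 extents per tree block */

	long extents = (file_mb + mb_per_group - 1) / mb_per_group;      /* ~854 */
	long leaves  = (extents + exts_per_block - 1) / exts_per_block;  /* 3    */
	long bitmaps = extents;                 /* one block bitmap per group read */

	printf("extents:        %ld\n", extents);
	printf("tree blocks:    %ld leaves + 1 index\n", leaves);
	printf("blocks to read: ~%ld plus group descriptor blocks\n",
	       leaves + 1 + bitmaps);
	return 0;
}

That lands in the ~900-1000 range quoted above, against the 6416 blocks
actually read in the ext4 + extents column.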
thanks, Alex