On Tue, May 15, 2018 at 10:06:23PM +0900, Wang Shilong wrote:
> From: Wang Shilong <wshilong@xxxxxxx>
>
> During our benchmarking, we found that write performance is sometimes
> not stable, and there are small reads during writes which can drop
> throughput (~30%).

Out of curiosity, what sort of benchmarks are you doing?

> It turned out that loading block bitmaps can add some latency here;
> also, for a heavily fragmented filesystem, we might need to load many
> bitmaps to find some free blocks.
>
> To improve the above situation, we have a patch to load the block
> bitmaps into memory and pin them there until umount (or until we
> release the memory on purpose).  This stabilizes write performance
> and improves performance on a heavily fragmented filesystem.

This is true, but I wonder how realistic this is on real production
systems.  For a 1 TiB file system, pinning all of the block bitmaps
will require 32 megabytes of memory (one 4K bitmap block for each
128 MiB block group).  Is that really realistic for your use case?

So is this something just for benchmarking (in which case, what are
you trying to benchmark)?  Or is this something that you want to use
in production?  And if so, perhaps something to consider is analyzing
how fragmented and how full you want to run your file system.

Something to perhaps consider doing is storing the bitmap in memory in
a compressed form.  For example, you could use a run-length encoding
scheme where 2 bytes are used to encode the starting block of a free
extent, and 2 bytes to encode the length of the free extent.  For a
large number of mostly full (or mostly empty, for that matter) block
allocation bitmaps, this will be a much more efficient way to cache
the information in memory, if you really want to keep all of the
allocation information in memory.

Something else to investigate is *why* the file system is getting so
fragmented in the first place, and whether there are things we can do
to prevent it from getting that fragmented in the first place....

						- Ted
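
P.S.  Just to make the run-length encoding suggestion concrete, here
is a rough userspace sketch of what converting one block group's
bitmap into (start, length) pairs might look like.  The struct and
function names are hypothetical, not actual ext4 code, and it assumes
4K blocks with 32768 blocks per group (i.e., a 4K on-disk bitmap):

/*
 * Sketch: encode a block group's allocation bitmap as an array of
 * free extents, each stored as a 2-byte start offset plus a 2-byte
 * length.  Hypothetical names; not real ext4 code.
 */
#include <stdint.h>
#include <stdlib.h>

#define BLOCKS_PER_GROUP 32768		/* 4K blocks, 4K bitmap */

struct free_extent {
	uint16_t start;			/* first free block in the extent */
	uint16_t len;			/* number of free blocks */
};

static inline int block_is_free(const unsigned char *bitmap, unsigned int bit)
{
	/* In the on-disk bitmap a set bit means "in use". */
	return !(bitmap[bit >> 3] & (1 << (bit & 7)));
}

/*
 * Convert one group's bitmap into an array of free extents.  Returns
 * the number of extents, or -1 on allocation failure; on success the
 * caller must free *extents.
 */
int encode_group_bitmap(const unsigned char *bitmap,
			struct free_extent **extents)
{
	struct free_extent *ext = NULL;
	int nr = 0, capacity = 0;
	unsigned int bit = 0;

	while (bit < BLOCKS_PER_GROUP) {
		unsigned int start;

		/* Skip over blocks that are in use. */
		while (bit < BLOCKS_PER_GROUP && !block_is_free(bitmap, bit))
			bit++;
		if (bit >= BLOCKS_PER_GROUP)
			break;

		/* Measure the run of free blocks. */
		start = bit;
		while (bit < BLOCKS_PER_GROUP && block_is_free(bitmap, bit))
			bit++;

		if (nr == capacity) {
			struct free_extent *tmp;

			capacity = capacity ? capacity * 2 : 16;
			tmp = realloc(ext, capacity * sizeof(*ext));
			if (!tmp) {
				free(ext);
				return -1;
			}
			ext = tmp;
		}
		ext[nr].start = start;
		ext[nr].len = bit - start;
		nr++;
	}
	*extents = ext;
	return nr;
}

For a mostly full (or mostly empty) group this collapses the 4K bitmap
down to a handful of 4-byte entries, which is the point of the
suggestion; a real implementation would of course need to keep the
encoding in sync with allocations and frees.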