Re: nilfs_cleanerd using a lot of disk-write bandwidth

On Tuesday 09 of August 2011 14:25:07 you wrote:
>  Interesting. I still think something should be done to minimize the
>  amount of writes required. How about something like the following.
>  Divide situations into 3 classes (thresholds should be adjustable in
>  nilfs_cleanerd.conf):
> 
>  1) Free space good (e.g. space >= 25%)
>  Don't do any garbage collection at all, unless an entire block contains
>  only garbage.
> 
>  2) Free space low (e.g. 10% < space < 25%)
>  Run GC as now, with the nice/ionice applied. Only GC blocks where
>  $block_free_space_percent >= $disk_free_space_percent. So as the disk
>  space starts to decrease, the number of blocks that get considered for
>  GC increase, too.
> 
>  3) Free space critical (e.g. space < 10%)
>  As 2) but start decreasing niceness/ioniceness (niceness by 3 for every
>  1% drop in free space, so for example:
>  10% - 19
>  ...
>  7% - 10
>  ...
>  4% - 1
>  3% - -2
>  ...
>  1% - -8
> 
>  This would give a very gradual increase in GC aggressiveness that would
>  both minimize unnecessary writes that shorten flash life and provide a
>  softer landing in terms of performance degradation as space starts to
>  run out.
> 
>  The other idea that comes to mind on top of this is to GC blocks in
>  order of % of space in the block being reclaimable. That would allow for
>  the minimum number of blocks to always be GC-ed to get the free space
>  above the required threshold.
> 
>  Thoughts?


Could end up being too slow. A 2 TB filesystem has about 260'000 segments (given 
the default segment size of 8 MB). cleanerd already takes quite a bit of CPU 
power at times.

Also, cleanerd can do a lot of HDD seeks if some parts of the metadata aren't in 
cache. Performing some 260'000 seeks on a harddrive would take anywhere from 
1000 to 3000 seconds; that's not very interactive. Actually, it gets dangerously 
close to an hour.

However, if cleanerd did not have to follow this exact algorithm, but instead 
did something roughly similar (a heuristic rather than an exact algorithm), it 
could be good enough.

Possibly related: I'd love it if cleanerd tended to do some mild de-fragmentation 
of files. Not necessarily full-blown, exact defragmentation, just placing related 
data close together.


-- 
dexen deVries

[[[↓][→]]]

For example, if the first thing in the file is:
   <?kzy irefvba="1.0" rapbqvat="ebg13"?>
an XML parser will recognize that the document is stored in the traditional 
ROT13 encoding.

(( Joe English, http://www.flightlab.com/~joe/sgml/faq-not.txt ))