On Mon, 25 Mar 2013 17:23:40 -0700, Raymond Jennings said:

> Is there some sort of mechanism that throttles the size of the
> writeback pool?

There are a lot of tunables in /proc/sys/vm - everything from drop_caches
to swappiness to vfs_cache_pressure. Note that they all interact in
mystical and hard-to-understand ways. ;)

> it's somewhat related to my brainfuck queue, since I would like to
> stress test it digesting a huge pile of outbound data and seeing if it
> can make writeback less seeky.

The biggest challenge here is that there's a bit of a layering violation
to be resolved - when the VM is choosing which pages get written out
first, it really has no clue where on disk the pages are going. Consider
a 16M file that's fragged into 16 1M extents - they'll almost certainly
hit the writeback queue in logical block order, not physical address
order.

The only really good choices here are to either allow the writeback
queue to get deep enough that an elevator can do something useful (if
you only have 2-3 I/Os queued, you can do less than if you have 20-30
of them to sort into some useful order), or to use filesystems that are
less prone to fragmentation in the first place.

Just for the record, most of my high-performance stuff runs best with
the noop scheduler - when you're striping I/O across several hundred
disks, the last thing you want is some single-minded disk scheduler
re-arranging the I/Os and creating latency issues for your striping.

Might want to think about why there are lots of man-hours spent doing
new filesystems and stuff like zcache and kernel shared memory (KSM),
but the only I/O schedulers in tree are noop, deadline, and cfq :)
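
For the /proc/sys/vm tunables above, you'd normally just echo values
from a shell, but here's a minimal C sketch of the same thing - read
the current vm.swappiness and set a new one. The value 10 is purely
illustrative (not a recommendation), and the write needs root:

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/vm/swappiness", "r");
        int val;

        if (!f) { perror("fopen"); return 1; }
        if (fscanf(f, "%d", &val) == 1)
            printf("current swappiness: %d\n", val);
        fclose(f);

        /* 10 is an example value only - tune for your workload */
        f = fopen("/proc/sys/vm/swappiness", "w");
        if (!f) { perror("fopen for write"); return 1; }
        fprintf(f, "%d\n", 10);
        fclose(f);
        return 0;
    }

The same open/read/write pattern works for vfs_cache_pressure and the
rest of the knobs in that directory.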
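If you want to see the logical-vs-physical mismatch for yourself, the
FIEMAP ioctl (FS_IOC_FIEMAP) dumps a file's extent map - logical offset
versus physical disk address for each extent. Rough sketch, with the
map capped at 32 extents to keep it short:

    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>
    #include <linux/fiemap.h>

    #define MAX_EXTENTS 32   /* plenty for a demo; real tools loop */

    int main(int argc, char **argv)
    {
        struct fiemap *fm;
        unsigned int i;
        int fd;

        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* struct fiemap ends in a flexible extent array */
        fm = calloc(1, sizeof(*fm) +
                       MAX_EXTENTS * sizeof(struct fiemap_extent));
        if (!fm) return 1;
        fm->fm_length = ~0ULL;              /* map the whole file */
        fm->fm_extent_count = MAX_EXTENTS;

        if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0) {
            perror("FIEMAP");
            return 1;
        }

        for (i = 0; i < fm->fm_mapped_extents; i++)
            printf("logical %10llu -> physical %10llu (len %llu)\n",
                   (unsigned long long)fm->fm_extents[i].fe_logical,
                   (unsigned long long)fm->fm_extents[i].fe_physical,
                   (unsigned long long)fm->fm_extents[i].fe_length);

        close(fd);
        free(fm);
        return 0;
    }

Run that against a fragged 16M file and you'll see the extents come
back in logical order while the physical addresses jump all over the
disk - exactly the mess the elevator has to untangle.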
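And for the elevator side, both knobs live under /sys/block/<dev>/queue:
the scheduler itself, and nr_requests for queue depth so the scheduler
has more I/Os to sort. Sketch only - "sda" and 512 are stand-in example
values, and this assumes the sysfs layout of the 3.x kernels this
thread is about:

    #include <stdio.h>

    /* write one string value to a sysfs attribute (needs root) */
    static int write_str(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");

        if (!f) { perror(path); return -1; }
        fprintf(f, "%s\n", val);
        return fclose(f);
    }

    int main(void)
    {
        write_str("/sys/block/sda/queue/scheduler", "noop");
        write_str("/sys/block/sda/queue/nr_requests", "512");
        return 0;
    }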