Re: cap on writeback?

On Mon, Mar 25, 2013 at 8:17 PM,  <Valdis.Kletnieks@xxxxxx> wrote:
> On Mon, 25 Mar 2013 17:23:40 -0700, Raymond Jennings said:
>
>> Is there some sort of mechanism that throttles the size of the writeback pool?
>
> There are a lot of tunables in /proc/sys/vm - everything from drop_caches
> to swappiness to vfs_cache_pressure.  Note that they all interact in mystical
> and hard-to-understand ways. ;)

I'm pretty familiar with this directory, but alas, I can find nothing
in it that throttles writeback itself, i.e. nothing that would cap the
amount of data sitting in the "writeback" pool.

So again I ask, where is it?  Unless you are hinting I should search
the source myself ^^.
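
The closest knobs I can see are the dirty_* thresholds, which cap how
much dirty data can pile up before writeback starts, but nothing that
caps pages already under writeback.  For reference, a quick C sketch
that dumps them (assuming the usual /proc mount):

/* Dump the vm.dirty_* knobs that gate when writeback kicks in.
 * Note: these throttle dirty data, not pages already under writeback.
 * Assumes /proc is mounted in the usual place. */
#include <stdio.h>

int main(void)
{
    const char *knobs[] = {
        "dirty_background_ratio",    /* % dirty before background flush */
        "dirty_ratio",               /* % dirty before writers block */
        "dirty_expire_centisecs",    /* age at which dirty data must go */
        "dirty_writeback_centisecs", /* flusher thread wakeup interval */
    };
    char path[64], val[32];

    for (unsigned int i = 0; i < sizeof(knobs) / sizeof(knobs[0]); i++) {
        snprintf(path, sizeof(path), "/proc/sys/vm/%s", knobs[i]);
        FILE *f = fopen(path, "r");
        if (!f)
            continue; /* knob not present on this kernel */
        if (fgets(val, sizeof(val), f))
            printf("%-27s %s", knobs[i], val);
        fclose(f);
    }
    return 0;
}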

>> it's somewhat related to my brainfuck queue, since I would like to
>> stress test it by feeding it a huge pile of outbound data and seeing
>> if it can make writeback less seeky.
>
> The biggest challenge here is that there's a bit of a layering violation
> to be resolved - when the VM is choosing what pages get written out first,
> it really has no clue where on disk the pages are going.

Already realized this myself ^^

> Consider a 16M
> file that's fragged into 16 1M extents - they'll almost certainly hit
> the writeback queue in logical block order, not physical address order.
> The only really good choices here are either to allow the writeback queue
> to get deep enough that an elevator can do something useful (if you only
> have 2-3 IOs queued, you can do less than if you have 20-30 of them you
> can sort into some useful order), or to use filesystems that are less
> prone to fragmentation issues.

Indeed, the filesystem really ought to be the one deciding what to
flush, and it should be able to take hints from the block layer about
where a given sector actually lands on disk.
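
For what it's worth, userspace can already peek at the logical-to-
physical mapping through the FIEMAP ioctl; a minimal sketch (error
handling trimmed) that prints where a file's extents actually sit on
disk:

/* Print a file's extent map: logical offset -> physical disk offset.
 * Uses the FIEMAP ioctl (Linux 2.6.28+). */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Room for 32 extents; a badly fragged file may need more. */
    size_t sz = sizeof(struct fiemap) + 32 * sizeof(struct fiemap_extent);
    struct fiemap *fm = calloc(1, sz);
    fm->fm_length = ~0ULL;    /* map the whole file */
    fm->fm_extent_count = 32;

    if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0) {
        perror("fiemap");
        return 1;
    }

    for (unsigned int i = 0; i < fm->fm_mapped_extents; i++)
        printf("extent %2u: logical %10llu -> physical %12llu (%llu bytes)\n",
               i,
               (unsigned long long)fm->fm_extents[i].fe_logical,
               (unsigned long long)fm->fm_extents[i].fe_physical,
               (unsigned long long)fm->fm_extents[i].fe_length);
    free(fm);
    close(fd);
    return 0;
}

Run against that hypothetical 16M file fragged into 16 extents, it
would show exactly the physical scatter the elevator has to cope with.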

> Just for the record, most of my high-performance stuff runs best with
> the noop scheduler - when you're striping I/O across several hundred disks,
> the last thing you want is some single-minded disk scheduler re-arranging
> the I/Os and creating latency issues for your striping.

> Might want to think about why lots of man-hours get spent on new
> filesystems and stuff like zcache and kernel shared memory, but the
> only IO schedulers in tree are noop, deadline, and cfq :)

Hey, gotta cut my teeth somewhere. :)
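
P.S.  For anyone following along, picking the noop elevator for a
device is just a sysfs write; a minimal C sketch (sda and noop are
only examples, and it needs root):

/* Select the noop I/O scheduler for one device via its sysfs knob.
 * Same effect as: echo noop > /sys/block/sda/queue/scheduler
 * ("sda" and "noop" are examples only; must run as root.) */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/block/sda/queue/scheduler", "w");
    if (!f) {
        perror("open scheduler knob");
        return 1;
    }
    fputs("noop\n", f);
    fclose(f);
    return 0;
}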
