Re: [Lsf] IO less throttling and cgroup aware writeback (Was: Re: Preliminary Agenda and Activities for LSF)

On Fri, Apr 22, 2011 at 06:28:29PM +0200, Andrea Arcangeli wrote:

[..]
> > Also it is only CFQ which gives READS so much preference over WRITES.
> > deadline and noop, which we typically use on faster storage, do not. There
> > we might take a bigger hit on READ latencies, depending on what the storage
> > is and how affected it is by a burst of WRITES.
> > 
> > I guess it boils down to better system control and better predictability.
> 
> I tend to think that to get even better read latency and predictability,
> the IO scheduler could dynamically and temporarily reduce the max
> sector size of write DMA (and also ensure that any read readahead is
> reduced to the same dynamically reduced sector size, or it would be
> detrimental to the number of read DMAs issued for each userland read).
> 
> Maybe with tagged queuing things are better and the dma size doesn't
> make a difference anymore, I don't know. Surely Jens knows this best
> and can tell me if I'm wrong.
> 
> Anyway it should be really easy to test: a two-liner reducing the
> max sector size in scsi_lib and the max readahead should let you
> see how fast firefox starts with cfq while dd if=/dev/zero is running,
> and whether there's any difference at all.

I did some quick runs.

- Default queue depth is 31 on my SATA disk. Reducing queue depth to 1
  helps a bit.

  In CFQ we already try to reduce the queue depth of WRITES if READS
  are going on.

- I reduced /sys/block/sda/queue/max_sectors_kb to 16. That seemed to
  help with firefox launch time (the exact knobs are sketched below).
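
For reference, the knobs involved are the usual sysfs ones. A rough sketch
of the kind of run I mean (the dd target path and the firefox timing step
are just illustrative, and the read_ahead_kb line is Andrea's readahead
suggestion, which I have not tried yet):

  # cap the NCQ queue depth (default was 31 here)
  echo 1 > /sys/block/sda/device/queue_depth

  # cap the maximum request size to 16KB
  echo 16 > /sys/block/sda/queue/max_sectors_kb

  # optionally keep readahead in line with the smaller request size
  echo 16 > /sys/block/sda/queue/read_ahead_kb

  # streaming writer in the background, then time the firefox launch
  dd if=/dev/zero of=/mnt/test/zerofile bs=1M count=4096 &
  time firefox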

There are a couple of interesting observations, though.

- Even after I reduced max_sectors_kb to 16, I saw requests of 1024 sectors
  coming from the flusher threads (see the blktrace sketch after this list).

- Firefox launch time went down after reducing max_sectors_kb, but it did
  not help much when I tried to load the first website, "lwn.net". It still
  took a little more than a minute before I could select lwn.net from the
  cached entries and then actually load and display the page.
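
One way to watch the actual request sizes hitting the device is blktrace;
a rough sketch (only the sda device name is specific to my box):

  # watch dispatched requests on sda while the writeback is going on
  blktrace -d /dev/sda -o - | blkparse -i - | grep ' D '

  # Each D (dispatch) line ends with "sector + nr_sectors [process]".
  # Writeback requests show up with the flusher thread (e.g. [flush-8:0])
  # as the issuer, and nr_sectors is what to compare against
  # max_sectors_kb * 2 (512-byte sectors).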

I will spend more time figuring out what's happening here.

But in general, reducing the max request size dynamically sounds
interesting. I am not sure how the upper layers (dm etc.) are impacted
by this.
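
For the dm question, a first sanity check might simply be to compare the
limits the stacked device advertises with those of the underlying disk
after changing the value; a sketch (dm-0 is just an example name):

  # limits advertised by the dm device vs. the underlying SATA disk
  cat /sys/block/dm-0/queue/max_sectors_kb
  cat /sys/block/sda/queue/max_sectors_kb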

Thanks
Vivek

