Re: [LSF/MM TOPIC] [ATTEND] Throttling I/O

Hey, Suresh.

On Fri, Jan 25, 2013 at 06:49:34PM +0530, Suresh Jayaraman wrote:
> - Making cfq schedule the per cgroup sync/async queues according to I/O
>   weights would mean that we'll need to use per cgroup cfqq's instead
>   of per process ones? What will the impact on sync latencies be if,
>   for example, we have many sync-only tasks in one cgroup and many
>   async tasks in another?  What if BLK_CGROUP is not configured, what
>   would be the fallback behavior?

So, we currently have sync cfqqs in cgroup cfqgs and shared (async)
cfqqs in the root cfqg.  The end result would be splitting the shared
cfqqs into cgroup cfqgs.  We may have to change how cfqgs are chosen
depending on whether a cfqg only has async IOs pending.  Not sure.
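
To make that concrete, a minimal sketch of the split, assuming the
existing async_cfqq[]/async_idle_cfqq slots in cfq_data simply move
into each cfq_group; everything else is elided and this is not an
actual patch:

	#define IOPRIO_BE_NR	8	/* as in include/linux/ioprio.h */

	struct cfq_queue;

	/*
	 * Per-cgroup scheduling entity (sketch).  Today the async slots
	 * below live in cfq_data and are shared device-wide; moving them
	 * here means async IO is weighted per cgroup like sync IO.
	 */
	struct cfq_group {
		/* ... existing per-cgroup service tree state ... */
		struct cfq_queue *async_cfqq[2][IOPRIO_BE_NR];	/* RT, BE */
		struct cfq_queue *async_idle_cfqq;		/* IDLE */
	};

With something like that in place, the cfqg picked for an async request
would come from the issuing task's blkcg rather than always falling back
to the root cfqg.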

> - Suppose we have 100 cgroups and we are to have one cfqq per priority
>   per cgroup; this would mean we'll be requiring 100 x 3 x 8 = 2400
>   cfqq's (3 classes and 8 priorities) in the worst case (as opposed to
>   the current 24 cfqqs)? This may not be as drastic as it sounds, as we
>   create cfqq's only on demand and we normally won't have tasks with
>   every priority and every class?

I don't think that's a problem.  We already have a cfqq per active IO
context, which can go way beyond 10k depending on the workload.

Thanks.

-- 
tejun

