[LSF/MM TOPIC] [ATTEND] Throttling I/O

Hello,

I'd like to discuss again[1] the problem of throttling buffered writes
and a throttle mechanism that works for all kinds of I/O.

Some background information.

During last year's LSF/MM, Fengguang discussed his proportional I/O
controller patches as part of the writeback session. The limitations
seen in his approach were a) it does not handle bursty I/O submission
in the flusher thread, b) it shares config variables among different
policies, and c) it violates layering and lacks a long-term design.
Tejun proposed a back-pressure approach to the problem, i.e. apply
pressure where the problem is (the block layer) and propagate it
upwards.

The general opinion at that time was that more input/consensus was
needed on a natural, flexible and extensible "interface". The
discussion thread that Vivek started[2] to collect input on the
"interface" did result in a good collection of inputs, but I'm not sure
whether it represents the views of all the interested parties.

At Kernel Summit last year, I learned from LWN[3] that the topic was
discussed again. Tejun apparently proposed a solution that splits up
the global async CFQ queue by cgroup, so that the CFQ scheduler can
easily schedule the per-cgroup sync/async queues according to the
per-cgroup I/O weights. Fengguang proposed supporting per-cgroup
buffered write weights in balance_dirty_pages() and running a
user-space daemon that updates the CFQ/BDP weights every second. There
doesn't seem to be consensus on either of the proposed approaches.
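
As an illustration of what the user-space half of Fengguang's proposal
might look like, here is a minimal sketch of a daemon loop that rewrites
the cgroup v1 blkio.weight files once a second. The cgroup names and
weight values here are made up; a real daemon would derive them from the
per-cgroup dirty rates rather than apply a static split.

/*
 * Hypothetical user-space weight updater (illustration only).
 * Assumes the cgroup v1 blkio controller is mounted at
 * /sys/fs/cgroup/blkio and that the named cgroups already exist.
 */
#include <stdio.h>
#include <unistd.h>

static int set_blkio_weight(const char *cgroup, int weight)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/fs/cgroup/blkio/%s/blkio.weight", cgroup);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%d\n", weight);
	return fclose(f);
}

int main(void)
{
	for (;;) {
		/* made-up static weights; the proposal would recompute
		 * these every second from writeback/BDP statistics */
		set_blkio_weight("heavy_writer", 200);
		set_blkio_weight("everyone_else", 800);
		sleep(1);
	}
	return 0;
}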

Looking at the possibility of prototyping Tejun's proposed idea led to
many questions (my understanding may not be complete here as it is
based only on LWN's mem-cg mini-summit coverage, so please correct me
if I'm wrong).

- Making CFQ schedule the per-cgroup sync/async queues according to I/O
  weights would mean that we'd need per-cgroup cfqq's instead of
  per-process ones? What would the impact on sync latencies be if, for
  example, we have many sync-only tasks in one cgroup and many async
  tasks in another? And if BLK_CGROUP is not configured, what would the
  fallback behavior be?

- Suppose we have 100 cgroups and one cfqq per priority per cgroup;
  this would mean we'd require 100 x 3 x 8 = 2400 cfqq's (3 classes and
  8 priorities) in the worst case, as opposed to the current 24 cfqqs
  (see the small arithmetic sketch after this list)? This may not be as
  drastic as it sounds, since cfqq's are created only on demand and we
  normally won't have tasks with every priority and every class.
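
To make the worst-case arithmetic above concrete, here is a trivial
stand-alone sketch (not kernel code) using the same hypothetical
numbers, i.e. 100 cgroups, 3 scheduling classes and 8 priority levels:

#include <stdio.h>

int main(void)
{
	int cgroups = 100;	/* hypothetical cgroup count */
	int classes = 3;	/* RT, BE, IDLE */
	int prios   = 8;	/* priority levels per class */

	/* today: shared async queues, one per (class, prio) pair */
	printf("current worst case   : %d cfqqs\n", classes * prios);

	/* per-cgroup queues: the same product, once per cgroup */
	printf("per-cgroup worst case: %d cfqqs\n",
	       cgroups * classes * prios);
	return 0;
}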

I'm primarily interested in having the ability to limit/throttle
buffered I/O on a multiuser system, where one heavy I/O user shouldn't
be impacting others and everyone should get their allocated share. I
understand, though, that there are different possible use-cases, that
the agreed approach shouldn't preclude any potential use-case, and
hence that having a consensus is quite important. So, I think a
discussion on the topic might help.

I would also be interested in the other network filesystem topics that
have already been proposed, including NFS Ganesha, the readdirplus
syscall, etc. I have been working on network filesystems for many years
and recently started looking into the block layer side of things too.


[1] http://comments.gmane.org/gmane.linux.kernel.mm/74805 (Last year's
proposal)
[2] http://www.spinics.net/lists/linux-fsdevel/msg53171.html
[3] http://lwn.net/Articles/516540/


Thanks

-- 
Suresh Jayaraman