[LSF/FS TOPIC] I/O performance isolation for shared storage

I/O performance is the bottleneck in many systems, from phones to
servers. Choosing which request to schedule at any moment is crucial
for systems that must deliver both interactive latency and high
throughput. When you're watching a video on your desktop, you don't
want it to skip because a kernel build is running in the background.

To address this in our environment, Google has now deployed the
blk-cgroup code worldwide, and I'd like to share some of our
experiences. We've made modifications for our purposes and are in the
process of proposing them upstream:

  - Page tracking for buffered writes
  - Fairness-preserving preemption across cgroups
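
For context, blk-cgroup is driven entirely through the cgroup
filesystem. Here is a minimal sketch of putting a task under a blkio
weight, assuming the v1 blkio controller is mounted at
/sys/fs/cgroup/blkio; the group name and weight value are
illustrative:

#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

static int write_str(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");

    if (!f || fputs(val, f) == EOF) {
        perror(path);
        if (f)
            fclose(f);
        return -1;
    }
    return fclose(f);
}

int main(void)
{
    char pid[32];

    /* Create a group; the kernel populates its control files. */
    if (mkdir("/sys/fs/cgroup/blkio/video", 0755) && errno != EEXIST) {
        perror("mkdir");
        return 1;
    }

    /* Favor this group: v1 blkio weights range 100-1000, default 500. */
    write_str("/sys/fs/cgroup/blkio/video/blkio.weight", "1000");

    /* Move the calling task into the group. */
    snprintf(pid, sizeof(pid), "%d", getpid());
    write_str("/sys/fs/cgroup/blkio/video/tasks", pid);

    return 0;
}

With CFQ as the scheduler, the video group then gets roughly twice
the disk time of a default-weight sibling under contention.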

There is further work to do on fine-grained accounting and isolation.
For example, many file servers in a Google cluster do I/O on behalf of
hundreds, even thousands, of clients. Each client has different
service requirements, and it is inefficient to map them to
(cgroup, task) pairs: that would mean a dedicated cgroup, and a task
confined to it, for every client a server touches.
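
To make the inefficiency concrete: with only (cgroup, task) pairs, a
server task that serves many clients has to move itself into a
different cgroup before each request, for instance by writing its pid
into the target group's tasks file. A sketch of that per-request
dance follows; the client-%u group naming is purely illustrative, not
a proposed interface:

#include <stdio.h>
#include <unistd.h>

/*
 * Sketch only: move the calling task into the blkio group for the
 * client it is about to serve. Done before every request, this adds
 * a cgroup-filesystem write to each I/O, on top of requiring one
 * cgroup per client to exist at all.
 */
static int enter_client_cgroup(unsigned int client_id)
{
    char path[128];
    FILE *f;

    snprintf(path, sizeof(path),
             "/sys/fs/cgroup/blkio/client-%u/tasks", client_id);
    f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "%d", getpid());
    return fclose(f);
}

int main(void)
{
    /* Before touching the disk on client 42's behalf: */
    if (enter_client_cgroup(42))
        perror("enter_client_cgroup");
    return 0;
}

Something finer-grained, attributing individual I/Os rather than
whole tasks, seems to be what this workload calls for.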