Re: [PATCH v6 0/9] memcg: per cgroup dirty page accounting

On Fri, 11 Mar 2011 10:43:22 -0800
Greg Thelen <gthelen@xxxxxxxxxx> wrote:

>
> ...
> 
> This patch set provides the ability for each cgroup to have independent dirty
> page limits.

Here, it would be helpful to describe the current kernel behaviour,
and to explain what is wrong with it and why the patch set improves
things!

> 
> ...
>
> Known shortcomings (see the patch 1/9 update to Documentation/cgroups/memory.txt
> for more details):
> - When a cgroup dirty limit is exceeded, then bdi writeback is employed to
>   writeback dirty inodes.  Bdi writeback considers inodes from any cgroup, not
>   just inodes contributing dirty pages to the cgroup exceeding its limit.  

This is a pretty large shortcoming, I suspect.  Will it be addressed?

There's a risk that a poorly (or maliciously) configured memcg could
have a pretty large effect upon overall system behaviour.  Would
elevated permissions be needed to do this?

We could just crawl the memcg's page LRU and bring things under control
that way, couldn't we?  That would fix it.  What were the reasons for
not doing this?

> - A cgroup may exceed its dirty limit if the memory is dirtied by a process in a
>   different memcg.

Please describe this scenario in (a lot) more detail?


--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

