Re: [Lsf] IO less throttling and cgroup aware writeback (Was: Re: Preliminary Agenda and Activities for LSF)

On Fri, Apr 08, 2011 at 09:50:58AM -0400, Vivek Goyal wrote:
> On Fri, Apr 08, 2011 at 09:47:17AM +1000, Dave Chinner wrote:
> > On Thu, Apr 07, 2011 at 04:04:37PM -0400, Vivek Goyal wrote:
> > > On Thu, Apr 07, 2011 at 09:08:04AM +1000, Dave Chinner wrote:
> > > 
> > > [..]
> > > > > At the very least, when a task is moved from one cgroup to another,
> > > > > we've got a shared inode case.  This probably won't happen more than
> > > > > once for most tasks, but it will likely be common.
> > > > 
> > > > That's not a shared case, that's a transfer of ownership. If the
> > > > task changes groups, you have to charge all its pages to the new
> > > > group, right? Otherwise you've got a problem where a task that is
> > > > not part of a specific cgroup is still somewhat controlled by its
> > > > previous cgroup. It would also still influence that previous group
> > > > even though it's no longer a member. Not good for isolation purposes.
> > > > 
> > > > And if you are transferring the state, moving the inode from the
> > > > dirty list of one cgroup to another is trivial and avoids any need
> > > > for the dirty state to be shared....
> > > 
> > > I am wondering how you map a task to an inode. Multiple tasks in the
> > > group might have written to the same inode. Now which task owns it?
> > 
> > That sounds like a completely broken configuration to me. If you are
> > using cgroups for isolation, you simply do not share *anything*
> > between them.
> > 
> > Right now the only use case that has been presented for shared
> > inodes is transferring a task from one cgroup to another.
> 
> Moving applications dynamically across cgroups happens quite often,
> just to put a task in the right cgroup after it has been launched

If it's just been launched, it won't have dirtied very many files so
I think shared dirty inodes for this use case is not an issue.

> or if
> a task has been running for some time and the system admin decides that
> it is causing heavy IO impacting other cgroups' IO. Then the system
> admin might move it into a separate cgroup on the fly.

And I'd expect manual load balancing to be the exception rather than
the rule. Even so, if that process is doing lots of IO to the same
file as other tasks that it is interfering with, then there's an
application level problem there....

> > Why on
> > earth would you do that if it is sharing resources with other tasks
> > in the original cgroup? What use case does this represent, how often
> > is it likely to happen, and who cares about it anyway?
> 
> > 
> > Let's not overly complicate things by making up requirements that
> > nobody cares about....
> 
> Ok, so you are suggesting that we always assume that only one task has
> written pages to an inode, and if that's not the case it is a broken
> configuration.

Not broken, but initially unsupported.

> So if a task moves across cgroups, determine the pages and associated
> inodes and move everything to the new cgroup. If an inode happened to be
> shared, then the inode moves irrespective of the fact that somebody else
> was also doing IO to it. I guess that's a reasonable first step.

It seems like the simplest way to start - once we have code that
works doing the simple things right we can start to complicate it ;)

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

