Re: [LSF/FS TOPIC] I/O performance isolation for shared storage

On Fri 04-02-11 15:07:15, Chad Talbott wrote:
> > Also curious to know if per memory cgroup dirty ratio stuff got in and how
> > did we deal with the issue of selecting which inode to dispatch the writes
> > from based on the cgroup it belongs to.
> 
> We have some experience with per-cgroup writeback under our fake-NUMA
> memory container system. Writeback under memcg will likely face
> similar issues.  See Greg Thelen's topic description at
> http://article.gmane.org/gmane.linux.kernel.mm/58164 for a request for
> discussion.
> 
> Per-cgroup dirty ratios are just the beginning, as you mention.  Unless
> the IO scheduler can see the deep queues of all the blocked tasks, it
> can't make the right decisions.  Also, today writeback is ignorant of
> the tasks' debt to the IO scheduler, so it issues the "wrong" inodes.
  I'm curious: Could you elaborate a bit more on this? I'm not sure what
a debt to the IO scheduler is, or why the choice of inodes would matter...
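To make sure I'm parsing the terminology, below is how I currently read
"debt". It is purely a toy userspace sketch - all structure names, fields
and numbers are made up and bear no relation to the real flusher or IO
scheduler code:

/*
 * Toy userspace sketch - not kernel code, everything here is made up.
 * It only illustrates one possible reading of "debt": how far a
 * cgroup's dirty pages are over its per-cgroup dirty budget, and a
 * flusher that prefers inodes owned by the most indebted cgroup.
 */
#include <stdio.h>

struct toy_cgroup {
	const char *name;
	unsigned long dirty_pages;	/* pages this cgroup has dirtied */
	unsigned long dirty_limit;	/* its per-cgroup dirty budget */
};

struct toy_inode {
	unsigned long ino;
	struct toy_cgroup *owner;	/* cgroup charged for its dirty pages */
	unsigned long dirty_pages;
};

/* "Debt": how far a cgroup is over its dirty budget (negative = under). */
static long cgroup_debt(const struct toy_cgroup *cg)
{
	return (long)cg->dirty_pages - (long)cg->dirty_limit;
}

/*
 * A debt-aware flusher would pick the dirty inode whose owning cgroup
 * is furthest over budget, rather than walking inodes purely in
 * dirtied-when order.
 */
static struct toy_inode *pick_inode(struct toy_inode *inodes, int n)
{
	struct toy_inode *best = NULL;
	int i;

	for (i = 0; i < n; i++) {
		if (!inodes[i].dirty_pages)
			continue;
		if (!best ||
		    cgroup_debt(inodes[i].owner) > cgroup_debt(best->owner))
			best = &inodes[i];
	}
	return best;
}

int main(void)
{
	struct toy_cgroup fast = { "fast", 800, 1000 };	/* under budget */
	struct toy_cgroup slow = { "slow", 900, 200 };	/* far over budget */
	struct toy_inode inodes[] = {
		{ 10, &fast, 500 },
		{ 11, &slow, 300 },
		{ 12, &slow, 600 },
	};
	struct toy_inode *victim = pick_inode(inodes, 3);

	if (victim)
		printf("flush inode %lu (owner %s, debt %ld pages)\n",
		       victim->ino, victim->owner->name,
		       cgroup_debt(victim->owner));
	return 0;
}

If that is roughly the picture, I can see why issuing the "wrong" inodes
(ones owned by cgroups still under their budget) hurts isolation. If
"debt" means something else entirely, please correct me.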
Thanks.

								Honza
-- 
Jan Kara <jack@xxxxxxx>
SUSE Labs, CR