Re: [LSF/MM TOPIC] [ATTEND] Throttling I/O

On Fri, Jan 25, 2013 at 09:52:33AM -0800, Tejun Heo wrote:
> Hey, guys.
> 
> On Fri, Jan 25, 2013 at 11:34:08AM -0500, Vivek Goyal wrote:
> > And I think Tejun wanted to implement throttling at the block layer and
> > wanted the VM to adjust/respond to per-group IO backlog when it comes
> > to writing dirty data/inodes.
> > 
> > Once we have taken care of the writeback problem, then comes the issue
> > of being able to associate a dirty inode/page with a cgroup. Not sure
> > if anything has happened on that front or not. In the past the simple
> > assumption was that one inode belongs to one IO cgroup.
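The split described above -- throttle at the block layer, queue what exceeds the limit -- boils down to a per-group token bucket, which is a minimal sketch of the idea rather than actual kernel code (class and method names here are invented for illustration):

```python
# Minimal per-group token-bucket throttle, illustrating block-layer
# bps limiting per cgroup. Sketch only; names are invented, not kernel APIs.
class GroupThrottle:
    def __init__(self, bps_limit):
        self.bps_limit = bps_limit   # allowed bytes per second for this group
        self.tokens = 0.0            # accumulated byte budget
        self.last = 0.0              # time of last refill, in seconds

    def refill(self, now):
        # Credit the group for elapsed wall time, capped at one
        # second's worth of budget so idle groups cannot burst forever.
        self.tokens = min(self.bps_limit,
                          self.tokens + (now - self.last) * self.bps_limit)
        self.last = now

    def can_dispatch(self, now, nbytes):
        # An IO is dispatched only if the group has enough budget;
        # otherwise it stays on the group's backlog queue, which is
        # the signal the VM would react to for dirty throttling.
        self.refill(now)
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False
```

The interesting part for writeback is the backlog: once `can_dispatch` starts returning False for a group, that group's dirty pages pile up, and the VM needs per-group awareness to slow the dirtier rather than the whole system.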
> 
> Yeap, the above two sum it up pretty well.
> 
> > Also, seriously, in CFQ the group idling performance penalty is too
> > high and might easily start showing up even on a single-spindle SATA
> > disk, especially given that people will come up with hybrid SATA
> > drives with some internal caching. So SATA drives will not be as
> > slow.
> > 
> > So proportional group scheduling in CFQ is limited to the specific
> > corner case of a slow SATA drive. I am not sure how many people really
> > use it.
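For readers following along: proportional group scheduling means each group gets disk time in proportion to its weight (cgroup v1 `blkio.weight`, typically in the 100-1000 range), rather than a hard cap. The arithmetic is trivial; group names and the bandwidth figure below are invented for illustration:

```python
def proportional_shares(weights, total_bw):
    """Split total_bw among groups in proportion to their weights.

    Mirrors CFQ-style proportional IO control: a group's share is
    weight / sum(weights). Weights here are illustrative values in
    the cgroup v1 blkio.weight style (typically 100-1000).
    """
    total = sum(weights.values())
    return {group: total_bw * w / total for group, w in weights.items()}
```

For example, with weights {"fast": 500, "slow": 250, "batch": 250} the fast group is entitled to half the device's throughput, but -- unlike throttling -- it can use the rest whenever the other groups are idle.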
> 
> I don't think so.  If you look at personal usage, sure, it's not very
> useful, but then again proportional IO control itself isn't all that
> useful for personal use. If you go to backend infrastructure requiring
> a lot of capacity, though, spindled drives still rule the roost and
> large deployment of on-device flash cache is not as immediate,

Hi Tejun,

How many of these spindle drives are not behind some kind of hardware
RAID or on a SAN? Because any aggregation of spindle drives by a
hardware/external entity makes group scheduling not worth it very
quickly.

> 
> For example, google has been using half-hacky hierarchical writeback
> support in cfq for quite some time now and they'll switch to upstream
> implementation once we get it working, so I don't think it's a wasted
> effort.

I guess apart from Google I have not heard of anybody else using it
successfully, and that's what makes me skeptical about it. Maybe once
the support for buffered write control is in, things will be better,
because that's the biggest offending workload people want to protect
against.

Thanks
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
