Re: [PATCH 0/2] control cephfs generated io with the help of cgroup io controller

On Fri, 7 Jun 2019 at 03:15, Tejun Heo <tj@xxxxxxxxxx> wrote:
>
> Hello,
>
> On Thu, Jun 06, 2019 at 03:12:10PM -0400, Jeff Layton wrote:
> > (cc'ing Tejun)
>
> Thanks for the cc.  I'd really appreciate if you guys keep me in the
> loop.
>
> --
> tejun

Hi Tejun,

I'm really sorry that I didn't send these modifications to you. I
thought it would be impolite, and might annoy you, if I insisted on
submitting patches that you had already pushed back on. On the other
hand, we really do think that some kind of simple IO throttling
mechanism, even if it doesn't work perfectly, can provide basic
functionality to restrain the IO pressure coming from a single
client. That is exactly the situation in our production CephFS
clusters: we have only one active metadata node, and at times a few
crazy clients send out large numbers of getattr/lookup/open ops to it,
making the response time of other clients' metadata ops increase
significantly. We think that if we can limit the ops issued by a
single client, the total ops sent out by those few clients will also
be kept at a relatively low level. This is certainly far from a
complete IO QoS service, but it could help until a fully functional
one is in place. So I thought I would first discuss this with the
CephFS folks, and back down if they didn't agree. Again, I'm really
sorry that I didn't add you to this discussion; please forgive me :-)

Hi Jeff,

According to our observations, there are usually only 10 to 15 crazy
clients, and normal clients issue metadata ops at a rate below 80 per
second. According to our stress tests of the MDS, it can handle about
11000 getattrs and 3000 file/dir creations per second. So we thought
we could suggest that users set their per-client metadata IOPS limit
to 100, which should be sufficient for their workloads: even 15
throttled clients would generate at most 1500 ops per second, well
below what a single MDS can handle, so misbehaving clients cannot do
severe damage to the whole system. This approach is indeed primitive,
but since a full-scale IO QoS service is not available and is
relatively hard to implement, we thought it should provide some
help :-)
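
To make the idea concrete, below is a minimal userspace sketch of
capping a client at roughly 100 metadata ops per second with a token
bucket. The names here (md_throttle, md_throttle_get, the rate/burst
numbers) are purely illustrative assumptions, not code from the
actual patches, which do the throttling via the cgroup io controller;
this only shows the kind of per-client limit we have in mind.

/*
 * Token-bucket sketch: each client gets a bucket refilled at `rate`
 * metadata ops per second; an op is admitted only if a token is
 * available, otherwise the caller would delay or queue it.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

struct md_throttle {
	uint64_t rate;		/* allowed metadata ops per second */
	uint64_t burst;		/* bucket capacity */
	double tokens;		/* currently available ops */
	struct timespec last;	/* time of last refill */
};

static void md_throttle_init(struct md_throttle *t, uint64_t rate,
			     uint64_t burst)
{
	t->rate = rate;
	t->burst = burst;
	t->tokens = burst;
	clock_gettime(CLOCK_MONOTONIC, &t->last);
}

/* Refill tokens for the elapsed time, then try to consume one op. */
static bool md_throttle_get(struct md_throttle *t)
{
	struct timespec now;
	double elapsed;

	clock_gettime(CLOCK_MONOTONIC, &now);
	elapsed = (now.tv_sec - t->last.tv_sec) +
		  (now.tv_nsec - t->last.tv_nsec) / 1e9;
	t->last = now;

	t->tokens += elapsed * t->rate;
	if (t->tokens > t->burst)
		t->tokens = t->burst;

	if (t->tokens < 1.0)
		return false;	/* over the limit: delay/queue the op */
	t->tokens -= 1.0;
	return true;
}

int main(void)
{
	struct md_throttle t;
	int sent = 0;

	/* the 100 metadata ops/s suggested above, with an equal burst */
	md_throttle_init(&t, 100, 100);

	for (int i = 0; i < 1000; i++)
		if (md_throttle_get(&t))
			sent++;

	printf("ops admitted immediately: %d\n", sent); /* ~100, the burst */
	return 0;
}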

Thanks for your help, guys :-)


