Re: [PATCH] cgroup: add a new group controller for cephfs

On Fri, 31 May 2019 at 04:59, Tejun Heo <tj@xxxxxxxxxx> wrote:
>
> Hello,
>
> On Wed, May 29, 2019 at 10:27:36AM +0800, Xuehan Xu wrote:
> > I think, since we are offering users an interface to control the
> > rate at which io reqs are issued, we'd better provide the interface
> > through the io controller. Is this right?
>
> I'm not entirely sure what the right approach is here.  For most
> controllers, there are concrete resources which are being controlled,
> even if it's a virtual resource like pids.  Here, it isn't clear how
> the resource should be defined.  Ideally, it should be defined as
> fractions / weights of whatever the backends can do, but that might
> not be that easy to define.
>
> Another issue is that non-work-conserving limits usually aren't
> enough to serve the majority of use cases, and it's better to at
> least consider what work-conserving control should look like before
> settling on interface decisions.

Hi, Tejun.

The resource we want to control is the usage of the ceph cluster's io
processing capability, and we plan to control it in terms of iops and
io bandwidth. We are considering a more work-conserving control
mechanism that involves the server side and adapts to the workload.
But for now, since we are mostly concerned about the scenario where a
single client uses up the whole cluster's io capability, we think we
should first implement a simple client-side io throttling scheme,
similar to the blkio controller's io throttle policy, which would be
relatively easy. It would also let us offer users io throttling even
when their servers don't support a more sophisticated QoS mechanism.
Am I right about this? Thanks:-) A rough sketch of the throttling we
have in mind follows.
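
To make this concrete, here is a minimal userspace sketch of the
per-cgroup token-bucket check such a client-side throttle could run
before issuing a request. All names here (cephfs_throttle,
throttle_may_issue, the struct fields) are made up for illustration;
this is not an existing cephfs or blkcg interface:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-cgroup throttle state; illustrative only. */
struct cephfs_throttle {
    uint64_t iops_limit;     /* configured requests per second */
    uint64_t tokens;         /* request tokens currently available */
    uint64_t last_refill_ns; /* time of the last token refill */
};

/* Refill tokens for the elapsed time, then try to consume one.
 * Returns true if the request may be issued now, false if the
 * caller should queue it until more tokens accumulate. */
static bool throttle_may_issue(struct cephfs_throttle *t, uint64_t now_ns)
{
    uint64_t refill = (now_ns - t->last_refill_ns) * t->iops_limit
                      / 1000000000ULL;

    if (refill > 0) {
        t->tokens += refill;
        if (t->tokens > t->iops_limit) /* cap burst at one second */
            t->tokens = t->iops_limit;
        t->last_refill_ns = now_ns;
    }
    if (t->tokens == 0)
        return false;
    t->tokens--;
    return true;
}

A bandwidth limit would work the same way, with the tokens counted in
bytes instead of requests.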

>
> > Actually, for now, we are considering implementing a ceph-specific
> > "blkcg_policy" which adds new io controller "cf" files to let users
> > modify configurations of the ceph-specific "blkcg_policy" and limit
> > the ceph reqs sent to the underlying cluster all by itself, rather
> > than relying on the existing blkcg_policies like io latency or io
> > throttle. Is this the right way to go? Thanks:-)
>
> Can we take a step back and think through what the fundamental
> resources are?  Do these control knobs even belong to the client
> machines?

Since we need to control the cluster's io resource usage at the
granularity of docker instances, the clients have to pass control
group information to the servers even in the server-side QoS
scenario, because only the clients know which docker instance a
requesting process belongs to. So we think that, either way, we need
some kind of cgroup-related functionality on the client side. Is this
right? Thanks:-) A minimal sketch of the client-side lookup follows.
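
For illustration, here is a userspace sketch of how a client could
look up the cgroup a requesting process belongs to, so that the path
(or an id derived from it) can be attached to outgoing ceph requests.
How the tag travels in the protocol is a separate question; this only
shows the lookup, and the helper name is hypothetical:

#include <stdio.h>
#include <string.h>

/* Read the unified-hierarchy (cgroup v2) path of the calling process
 * from /proc/self/cgroup, where the v2 entry looks like "0::/path".
 * Returns 0 on success and -1 on failure. */
static int get_own_cgroup_path(char *buf, size_t buflen)
{
    FILE *f = fopen("/proc/self/cgroup", "r");
    char line[512];
    int ret = -1;

    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "0::", 3) == 0) {
            line[strcspn(line, "\n")] = '\0'; /* strip the newline */
            snprintf(buf, buflen, "%s", line + 3);
            ret = 0;
            break;
        }
    }
    fclose(f);
    return ret;
}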

>
> Thanks.
>
> --
> tejun


