Re: Implement QoS for CephFS

On Wed, Jul 24, 2019 at 8:29 PM Songbo Wang <songbo1227@xxxxxxxxx> wrote:
>
> Hi guys,
>
> As a distributed filesystem, CephFS shares the whole cluster's
> resources, such as IOPS and throughput, among all clients. In some
> cases a few clients can monopolize these resources, so QoS for CephFS
> is needed.
>
> Based on the token bucket algorithm, I have implemented QoS for CephFS.
>
> The basic idea is as follows:
>
>   1. Set the QoS info as one of the dir's xattrs;
>   2. All clients accessing the same dirs see the same QoS setting;
>   3. Similar to Quota's config flow: when the MDS receives a QoS
> setting, it also broadcasts the message to all clients;
>   4. The limit can be changed online.
>
>
> QoS is configured as follows. It supports the
> {limit,burst} x {iops,bps,read_iops,read_bps,write_iops,write_bps}
> configuration settings, for example:
>
>       setfattr -n ceph.qos.limit.iops     -v 200 /mnt/cephfs/testdirs/
>       setfattr -n ceph.qos.burst.read_bps -v 200 /mnt/cephfs/testdirs/
>       getfattr -n ceph.qos.limit.iops            /mnt/cephfs/testdirs/
>       getfattr -n ceph.qos                       /mnt/cephfs/testdirs/
>
>
> But there is also a big problem: for the bps settings
> {bps/write_bps/read_bps}, if the limit is lower than a request's block
> size, the client will block until it accumulates enough tokens.
>
> Any suggestion will be appreciated, thanks!
>
> PR: https://github.com/ceph/ceph/pull/29266

I briefly skimmed this and if I understand correctly, this lets you
specify a per-client limit on hierarchies. But it doesn't try to
limit total IO across a hierarchy, and it doesn't let you specify
total per-client limits if a client has multiple mount points.

Given this, what's the point of maintaining the QoS data in the
filesystem instead of just as information that's passed when the
client mounts?
How hard is this scheme likely to be to implement in the kernel?
-Greg
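For readers following the thread, the token-bucket throttle discussed above can be sketched roughly as follows. This is a minimal, hypothetical illustration (the `TokenBucket` class, its names, and parameters are my own), not the PR's actual code:

```python
import time

class TokenBucket:
    """Minimal token bucket: tokens accrue at `rate` per second, capped at `burst`."""

    def __init__(self, rate, burst):
        self.rate = float(rate)        # steady-state refill, in tokens/second
        self.burst = float(burst)      # bucket capacity (maximum stored tokens)
        self.tokens = float(burst)     # start with a full bucket
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def consume(self, n=1):
        """Take n tokens if available; return False (i.e. throttle) otherwise."""
        self._refill()
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

# e.g. ceph.qos.limit.iops = 200 with an equal burst: of 300 back-to-back
# requests, roughly the burst's worth (200) succeed before throttling kicks in.
bucket = TokenBucket(rate=200, burst=200)
allowed = sum(bucket.consume() for _ in range(300))
```

This sketch also exhibits the problem raised at the end of the mail: a request needing `n` tokens with `n` greater than `burst` (e.g. a large write against a low bps limit) can never be satisfied, so a blocking caller would wait forever unless oversized requests are special-cased.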


