On Tue, 2019-06-04 at 21:51 +0800, xxhdx1985126@xxxxxxxxx wrote:
> From: Xuehan Xu <xxhdx1985126@xxxxxxx>
>
> Hi, Ilya
>
> I've changed the code to add a new IO controller policy that provides
> the functionality to restrain cephfs-generated IO in terms of IOPS and
> throughput.
>
> This inflexible approach is a little crude indeed, as Tejun said.
> But we think this should be able to provide some basic IO throttling
> for the cephfs kernel client, and can protect the cephfs cluster from
> buggy or even misbehaving client applications, whether or not the
> cephfs cluster itself has the ability to do QoS. So we are submitting
> these patches, in case they can really provide some help :-)
>
> Xuehan Xu (2):
>   ceph: add a new blkcg policy for cephfs
>   ceph: use the ceph-specific blkcg policy to limit ceph client ops
>
>  fs/ceph/Kconfig                     |   8 +
>  fs/ceph/Makefile                    |   1 +
>  fs/ceph/addr.c                      | 156 ++++++++++
>  fs/ceph/ceph_io_policy.c            | 445 ++++++++++++++++++++++++++++
>  fs/ceph/file.c                      | 110 +++++++
>  fs/ceph/mds_client.c                |  26 ++
>  fs/ceph/mds_client.h                |   7 +
>  fs/ceph/super.c                     |  12 +
>  include/linux/ceph/ceph_io_policy.h |  74 +++++
>  include/linux/ceph/osd_client.h     |   7 +
>  10 files changed, 846 insertions(+)
>  create mode 100644 fs/ceph/ceph_io_policy.c
>  create mode 100644 include/linux/ceph/ceph_io_policy.h

(cc'ing Tejun)

This is interesting work, but it's not clear to me how you'd use this
in practice. In particular, there are no instructions for users, and no
real guidelines on when and how you'd want to set these values.

Also, as Tejun pointed out, it's _really_ hard to parcel out resources
properly when you don't have an accurate count of them. AIUI, that's
the primary reason that the cgroup guys like interfaces that deal with
percentages of a whole rather than discrete limits.

I think we'd need to understand how we'd expect someone to use this in
practice before we could merge this. At a bare minimum, we'd need a
description of how you're setting these values in your environment,
and how you're gauging things like the total bandwidth and IOPS for
the clients.
--
Jeff Layton <jlayton@xxxxxxxxxx>