Re: [PATCH v3] libceph: add osd op counter metric support

On 2020/11/11 1:11, Jeff Layton wrote:
On Tue, 2020-11-10 at 16:44 +0100, Ilya Dryomov wrote:
On Tue, Nov 10, 2020 at 3:19 PM <xiubli@xxxxxxxxxx> wrote:
From: Xiubo Li <xiubli@xxxxxxxxxx>

The logic is the same as in osdc/Objecter.cc in userspace ceph.

URL: https://tracker.ceph.com/issues/48053
Signed-off-by: Xiubo Li <xiubli@xxxxxxxxxx>
---

V3:
- fix a typo when OR-ing the _WRITE flag

  include/linux/ceph/osd_client.h |  9 ++++++
  net/ceph/debugfs.c              | 13 ++++++++
  net/ceph/osd_client.c           | 56 +++++++++++++++++++++++++++++++++
  3 files changed, 78 insertions(+)

diff --git a/include/linux/ceph/osd_client.h b/include/linux/ceph/osd_client.h
index 83fa08a06507..24301513b186 100644
--- a/include/linux/ceph/osd_client.h
+++ b/include/linux/ceph/osd_client.h
@@ -339,6 +339,13 @@ struct ceph_osd_backoff {
         struct ceph_hobject_id *end;
  };

+struct ceph_osd_metric {
+       struct percpu_counter op_ops;
+       struct percpu_counter op_rmw;
+       struct percpu_counter op_r;
+       struct percpu_counter op_w;
+};
OK, so only reads and writes are really needed.  Why not expose them
through the existing metrics framework in fs/ceph?  Wouldn't "fs top"
want to display them?  Exposing latency information without exposing
overall counts seems rather weird to me anyway.

The fundamental problem is that debugfs output format is not stable.
The tracker mentions test_readahead -- updating some teuthology test
cases from time to time is not a big deal, but if a user facing tool
such as "fs top" starts relying on these, it would be bad.

Thanks,

                 Ilya
Those are all good points. The tracker is light on details. I had
assumed that you'd also be uploading this to the MDS in a later patch.
Is that also planned?

Yeah, this is on my todo list.


I'll also add that it might be nice to keep stats on copy_from2 as
well, since we do have a copy_file_range operation in cephfs.

That makes sense, and I will add it.

Thanks

BRs



