Re: [PATCH v2 2/4] ceph: track average/stdev r/w/m latency

On 9/14/21 10:00 PM, Venky Shankar wrote:
On Tue, Sep 14, 2021 at 7:22 PM Xiubo Li <xiubli@xxxxxxxxxx> wrote:

On 9/14/21 9:45 PM, Xiubo Li wrote:
On 9/14/21 9:30 PM, Venky Shankar wrote:
On Tue, Sep 14, 2021 at 6:39 PM Xiubo Li <xiubli@xxxxxxxxxx> wrote:
On 9/14/21 4:49 PM, Venky Shankar wrote:
[...]
In user space this is very easy to do, but not in kernel space, especially
since there is no floating-point arithmetic there.

As I remember, this is the main reason why I was planning to send the raw
metrics to the MDS and let the MDS do the computing.

So if possible, why not just send the raw data to the MDS and let the MDS
do the stdev computation?
Since metrics are sent each second (I suppose) and there can be N
operations done within that second, what raw data (say for avg/stdev
calculation) would the client send to the MDS?

Yeah.

For example, just send the "sq_sum" and the total number of operations to the MDS; these should be enough to compute the stdev. The MDS or the cephfs-top tool can then compute it as int_sqrt(sq_sum / total).

I am okay with both and it's up to you, but the stdev could be more accurate in userspace with floating-point computation.
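
As an illustration of that division of labor, here is a minimal sketch of a
receiver-side helper in userspace C. The function name and the
population-vs-sample choice are assumptions for illustration, not taken
from the patch or from the MDS code:

#include <math.h>
#include <stdint.h>

/*
 * Hypothetical receiver-side helper (MDS or cephfs-top): given the raw
 * "sq_sum" (sum of squared deviations from the running mean) and the
 * operation count sent by the client, derive the standard deviation.
 * In userspace, floating point gives full precision; a kernel-side
 * equivalent would approximate with int_sqrt(sq_sum / total).
 */
static double stdev_from_raw(uint64_t sq_sum, uint64_t total)
{
	if (total < 2)
		return 0.0;
	/* population stdev; divide by (total - 1) for the sample stdev */
	return sqrt((double)sq_sum / (double)total);
}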

Currently the kclient computes the avg incrementally:

avg(n) = ((n - 1) * avg(n - 1) + latency(n)) / n, which (up to integer
rounding) matches the real avg(n) = (latency(1) + latency(2) + ... + latency(n)) / n.

Because it's hard to record all the latency values, this is also what many
other user space tools do to compute the avg.
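
For clarity, the incremental form being described can be written as the
following sketch; the helper name is illustrative, not from the kclient
code:

#include <stdint.h>

/*
 * Incremental average: avg(n) = ((n - 1) * avg(n - 1) + latency(n)) / n.
 * Only the previous average and the count are needed, not the full
 * latency history.  Note that integer division truncates, and
 * (n - 1) * avg can overflow for very large counts; keeping a running
 * latency_sum (as the kclient does) avoids that overflow.
 */
static inline uint64_t update_avg(uint64_t avg, uint64_t n, uint64_t lat)
{
	/* n is the new total number of samples, including this one */
	return ((n - 1) * avg + lat) / n;
}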


Though the current stdev computing method is not exactly what the math
formula prescribes, it is close to it, because the kernel couldn't record
every latency value and recompute the stdev whenever needed; that would
consume a large amount of memory and CPU.

The approach is to calculate the running variance, i.e., compute the
variance as the data (latency) arrive one at a time.
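
That running-variance update can be sketched as below. The shape is
inferred from the __update_latency() call sites in the hunks that follow
(the helper's actual definition is in a part of the patch not quoted
here), so the exact signature and body are assumptions:

/* assumes <linux/ktime.h> and <linux/math64.h> */

/*
 * Welford-style running variance without floating point, roughly what a
 * helper like __update_latency would do per sample:
 *
 *   sq_sum += (lat - old_avg) * (lat - new_avg)
 *
 * so the variance never needs the full latency history.
 */
static inline void update_running_stats(u64 *totalp, ktime_t *sump,
					ktime_t *avgp, ktime_t *sq_sump,
					ktime_t lat)
{
	u64 total = ++(*totalp);
	ktime_t old_avg = *avgp;
	ktime_t new_avg;

	*sump += lat;
	new_avg = div64_s64(*sump, total);
	*sq_sump += (lat - old_avg) * (lat - new_avg);
	*avgp = new_avg;
	/* stdev is then int_sqrt(div64_u64(*sq_sump, total - 1)) */
}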

    }

    void ceph_update_read_metrics(struct ceph_client_metric *m,
@@ -343,23 +352,18 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
                              unsigned int size, int rc)
    {
        ktime_t lat = ktime_sub(r_end, r_start);
-     ktime_t total;

        if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
                return;

        spin_lock(&m->read_metric_lock);
-     total = ++m->total_reads;
        m->read_size_sum += size;
-     m->read_latency_sum += lat;
        METRIC_UPDATE_MIN_MAX(m->read_size_min,
                              m->read_size_max,
                              size);
-     METRIC_UPDATE_MIN_MAX(m->read_latency_min,
-                           m->read_latency_max,
-                           lat);
-     __update_stdev(total, m->read_latency_sum,
-                    &m->read_latency_sq_sum, lat);
+     __update_latency(&m->total_reads, &m->read_latency_sum,
+                      &m->avg_read_latency, &m->read_latency_min,
+                      &m->read_latency_max, &m->read_latency_stdev, lat);
        spin_unlock(&m->read_metric_lock);
    }

@@ -368,23 +372,18 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
                               unsigned int size, int rc)
    {
        ktime_t lat = ktime_sub(r_end, r_start);
-     ktime_t total;

        if (unlikely(rc && rc != -ETIMEDOUT))
                return;

        spin_lock(&m->write_metric_lock);
-     total = ++m->total_writes;
        m->write_size_sum += size;
-     m->write_latency_sum += lat;
        METRIC_UPDATE_MIN_MAX(m->write_size_min,
                              m->write_size_max,
                              size);
-     METRIC_UPDATE_MIN_MAX(m->write_latency_min,
-                           m->write_latency_max,
-                           lat);
-     __update_stdev(total, m->write_latency_sum,
-                    &m->write_latency_sq_sum, lat);
+     __update_latency(&m->total_writes, &m->write_latency_sum,
+                      &m->avg_write_latency, &m->write_latency_min,
+                      &m->write_latency_max, &m->write_latency_stdev, lat);
        spin_unlock(&m->write_metric_lock);
    }

@@ -393,18 +392,13 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
                                  int rc)
    {
        ktime_t lat = ktime_sub(r_end, r_start);
-     ktime_t total;

        if (unlikely(rc && rc != -ENOENT))
                return;

        spin_lock(&m->metadata_metric_lock);
-     total = ++m->total_metadatas;
-     m->metadata_latency_sum += lat;
-     METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
-                           m->metadata_latency_max,
-                           lat);
-     __update_stdev(total, m->metadata_latency_sum,
-                    &m->metadata_latency_sq_sum, lat);
+     __update_latency(&m->total_metadatas, &m->metadata_latency_sum,
+                      &m->avg_metadata_latency, &m->metadata_latency_min,
+                      &m->metadata_latency_max, &m->metadata_latency_stdev, lat);
        spin_unlock(&m->metadata_metric_lock);
    }
diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
index 103ed736f9d2..a5da21b8f8ed 100644
--- a/fs/ceph/metric.h
+++ b/fs/ceph/metric.h
@@ -138,7 +138,8 @@ struct ceph_client_metric {
        u64 read_size_min;
        u64 read_size_max;
        ktime_t read_latency_sum;
-     ktime_t read_latency_sq_sum;
+     ktime_t avg_read_latency;
+     ktime_t read_latency_stdev;
        ktime_t read_latency_min;
        ktime_t read_latency_max;

@@ -148,14 +149,16 @@ struct ceph_client_metric {
        u64 write_size_min;
        u64 write_size_max;
        ktime_t write_latency_sum;
-     ktime_t write_latency_sq_sum;
+     ktime_t avg_write_latency;
+     ktime_t write_latency_stdev;
        ktime_t write_latency_min;
        ktime_t write_latency_max;

        spinlock_t metadata_metric_lock;
        u64 total_metadatas;
        ktime_t metadata_latency_sum;
-     ktime_t metadata_latency_sq_sum;
+     ktime_t avg_metadata_latency;
+     ktime_t metadata_latency_stdev;
        ktime_t metadata_latency_min;
        ktime_t metadata_latency_max;
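
With the new per-type fields, a reader such as the debugfs metrics file
can report the precomputed average and stdev directly instead of
post-processing a sq_sum. A hypothetical sketch follows; the function
name and print format are assumptions, only the field and lock names
come from the patch above:

/* assumes <linux/seq_file.h> and the struct fields shown above */
static void show_read_latency(struct seq_file *s,
			      struct ceph_client_metric *m)
{
	spin_lock(&m->read_metric_lock);
	seq_printf(s, "read: avg=%lld stdev=%lld min=%lld max=%lld (us)\n",
		   ktime_to_us(m->avg_read_latency),
		   ktime_to_us(m->read_latency_stdev),
		   ktime_to_us(m->read_latency_min),
		   ktime_to_us(m->read_latency_max));
	spin_unlock(&m->read_metric_lock);
}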





