Interpreting ceph osd pool stats output

Hi,

I've been putting together a collectd plugin for Ceph, since the old
ones I could find no longer work. I'm gathering data from the mons'
admin sockets, merged with a couple of commands I issue through the
rados mon_command interface.
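For reference, the mon_command side looks roughly like this (a minimal
sketch using the python-rados binding; the helper name and the
commented-out connection code are illustrative, not my actual plugin):

```python
import json

# Hypothetical helper: build the JSON payload that the monitor
# command interface expects for "osd pool stats".
def build_pool_stats_cmd():
    return json.dumps({"prefix": "osd pool stats", "format": "json"})

# Against a live cluster this would be issued roughly like:
#
#   import rados
#   cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
#   cluster.connect()
#   ret, outbuf, outs = cluster.mon_command(build_pool_stats_cmd(), b'')
#   pool_stats = json.loads(outbuf)
```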

Nothing complicated, but the data has me a little confused.

When I run "osd pool stats" I get *two* different sets of metrics
describing client I/O and recovery I/O. Since the metrics differ
between the two, I can't merge them into a consistent view of what the
cluster is doing as a whole at any given point in time. For example,
client I/O reports in bytes_sec, but here the recovery dict is empty
and the recovery_rate reports objects_sec...

i.e.

}, {
    "pool_name": "rados-bench-cbt",
    "pool_id": 86,
    "recovery": {},
    "recovery_rate": {
        "recovering_objects_per_sec": 3530,
        "recovering_bytes_per_sec": 14462655,
        "recovering_keys_per_sec": 0,
        "num_objects_recovered": 7148,
        "num_bytes_recovered": 29278208,
        "num_keys_recovered": 0
    },
    "client_io_rate": {}
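For now I'm working around it by flattening each pool entry into a
fixed set of keys, defaulting absent or empty dicts to zero (a sketch;
the recovery field names come from the output above, while the
client_io_rate field names are an assumption on my part):

```python
# Assumed client_io_rate fields -- not confirmed against every release.
CLIENT_FIELDS = ("read_bytes_sec", "write_bytes_sec", "op_per_sec")
# Recovery fields as seen in the "osd pool stats" sample above.
RECOVERY_FIELDS = ("recovering_objects_per_sec",
                   "recovering_bytes_per_sec",
                   "recovering_keys_per_sec")

def flatten_pool_stats(entry):
    """Flatten one pool entry so every sample has the same keys,
    with missing or empty rate dicts treated as zero activity."""
    flat = {"pool_name": entry["pool_name"], "pool_id": entry["pool_id"]}
    client = entry.get("client_io_rate") or {}
    recovery = entry.get("recovery_rate") or {}
    for field in CLIENT_FIELDS:
        flat[field] = client.get(field, 0)
    for field in RECOVERY_FIELDS:
        flat[field] = recovery.get(field, 0)
    return flat
```

That at least gives collectd a uniform metric set per pool, even when
one of the two dicts is empty.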

This is running Jewel (10.2.5-37.el7cp).

Is this a bug or a 'feature' :)

Cheers,

Paul C
--


