Re: Interpreting ceph osd pool stats output

On Sun, Mar 12, 2017 at 9:49 AM, John Spray <jspray@xxxxxxxxxx> wrote:
> On Fri, Mar 10, 2017 at 8:52 PM, Paul Cuzner <pcuzner@xxxxxxxxxx> wrote:
>> Thanks John
>>
>> This is weird then. When I look at the data under client load, I see
>> the following:
>> {
>>     "pool_name": "default.rgw.buckets.index",
>>     "pool_id": 94,
>>     "recovery": {},
>>     "recovery_rate": {},
>>     "client_io_rate": {
>>         "read_bytes_sec": 19242365,
>>         "write_bytes_sec": 0,
>>         "read_op_per_sec": 12514,
>>         "write_op_per_sec": 0
>>     }
>> }
>>
>> No object-related counters - they're all block-based. The plugin I
>> have rolls up the block metrics across all pools to provide total
>> client load.
>
> Where are you getting the idea that these counters have to do with
> block storage?  What Ceph is telling you about here is the number of
> operations (or bytes in those operations) being handled by OSDs.
>

Perhaps it's my poor choice of words - apologies.

read_op_per_sec is the read IOPS seen by the OSDs from client activity
against the pool.

My point is that client I/O is expressed in these terms, but recovery
activity is not. I was hoping that both recovery and client I/O would
be reported in the same way, so you could get a view of the activity of
the system as a whole. I can sum bytes_sec from client I/O with
recovery_rate bytes_sec, which is something, but I can't see inside
recovery activity to tell how much is read versus write, or how much
IOP load is coming from recovery.
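
For reference, here's a minimal sketch of the roll-up I mean, based on
the JSON layout above from `ceph osd pool stats -f json`. The
recovery_rate field name (recovering_bytes_per_sec) is an assumption on
my part, since the sample output above shows recovery_rate empty:

#!/usr/bin/env python3
# Sketch: sum client and recovery throughput across all pools from
# `ceph osd pool stats -f json`. Counters missing from a pool's dict
# are treated as zero.
import json
import subprocess

def pool_stats():
    out = subprocess.check_output(
        ["ceph", "osd", "pool", "stats", "-f", "json"])
    return json.loads(out)  # list of per-pool dicts

def totals(stats):
    client_bytes = client_ops = recovery_bytes = 0
    for pool in stats:
        io = pool.get("client_io_rate", {})
        rec = pool.get("recovery_rate", {})
        client_bytes += io.get("read_bytes_sec", 0) + io.get("write_bytes_sec", 0)
        client_ops += io.get("read_op_per_sec", 0) + io.get("write_op_per_sec", 0)
        # recovery_rate only exposes an aggregate rate; there is no
        # read/write or op split to add here, which is the gap I'm
        # describing. Field name assumed from other Ceph output.
        recovery_bytes += rec.get("recovering_bytes_per_sec", 0)
    return client_bytes, client_ops, recovery_bytes

if __name__ == "__main__":
    cb, co, rb = totals(pool_stats())
    print("client: %d B/s, %d op/s; recovery: %d B/s" % (cb, co, rb))

That at least gives a combined bytes-per-second view, but the op-level
breakdown for recovery still isn't available.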


