Understanding op_r, op_w vs op_rw

Hi,

I'd like to gain a better understanding of which operations increment each of these performance counters. In particular: when is 'op_rw' incremented instead of 'op_r' + 'op_w'?

I've done a little investigation (v12.2.13), running various workloads and operations against an RBD volume (in a cluster with no other client activity):

- Most RBD 'operations' (create, rm, features disable/enable, map, unmap) emit 'op_rw', and often 'op_w' too

- Program reads and writes against a mounted RBD volume *only* emit 'op_r' and 'op_w' (never 'op_rw'), regardless of whether they are 'read + modify' of existing file data, and regardless of whether the writes are buffered, direct or sync
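For reference, I compared counters by snapshotting 'ceph daemon osd.<id> perf dump' before and after each workload and diffing the 'osd' section. A minimal sketch of that diffing (the snapshot values below are made-up examples, not real measurements):

```python
import json

# Hypothetical snapshots: in practice each string would be the JSON output
# of `ceph daemon osd.<id> perf dump`, captured before and after a workload.
BEFORE = '{"osd": {"op_r": 1000, "op_w": 500, "op_rw": 20}}'
AFTER = '{"osd": {"op_r": 1400, "op_w": 900, "op_rw": 20}}'

def op_deltas(before_json, after_json):
    """Return how much each op counter grew between two perf-dump snapshots."""
    before = json.loads(before_json)["osd"]
    after = json.loads(after_json)["osd"]
    return {k: after[k] - before[k] for k in ("op_r", "op_w", "op_rw")}

print(op_deltas(BEFORE, AFTER))
```

With these example snapshots the delta is 400 reads, 400 writes and no 'op_rw' increments, i.e. the pattern I see for program I/O.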

Is that correct? Or have I missed a program-driven workload that will produce 'op_rw'? [1]

In our production clusters I'm seeing similar counts of 'op_w' and 'op_rw' (for a given OSD), which would imply a lot of RBD management operations if those are the only thing that increments 'op_rw'.

Cheers

Mark

[1] Tested using fio and pgbench (a database benchmark). I mounted the volume using the kernel driver (I'll do some more experimentation with librbd).
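The fio side of [1] was roughly a mixed random read/write job like the one below (the device path is an example; adjust for your own mapping). Even with reads and writes interleaved on the same blocks, this only moved 'op_r' and 'op_w':

```ini
; Mixed read/write job against a kernel-mapped RBD device.
; /dev/rbd0 is an example path for the mapped volume.
[global]
ioengine=libaio
direct=1
filename=/dev/rbd0
bs=4k
iodepth=16
runtime=60
time_based

[rmw]
rw=randrw
rwmixread=50
```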

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



