I did say I'd test using librbd - and this changes my observations.
Using fio configured with the rbd driver:
- a random write workload emits roughly equal numbers of 'op_w' and
'op_rw' initially, then just 'op_w' (perhaps until the sparsely
allocated image has been filled in?). The job file was roughly as
sketched below.
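
For reference, here is a minimal sketch of the sort of fio job file I
mean (pool and image names here are illustrative placeholders, not my
exact settings):

  [global]
  # the rbd ioengine drives the image via librbd, no kernel mount
  ioengine=rbd
  clientname=admin
  pool=rbd
  rbdname=fio-test
  direct=1

  [randwrite]
  rw=randwrite
  bs=4k
  iodepth=32
  runtime=60
  time_based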
So this certainly does help me understand why I'm seeing a lot of
'op_rw', but any further clarification would be appreciated!
regards
Mark
On 2/09/20 6:17 pm, Mark Kirkwood wrote:
Hi,
I'd like to gain a better understanding about what operations emit
which of these performance counters, in particular when is 'op_rw'
incremented instead of 'op_r' + 'op_w'?
I've done a little investigation (v12.2.13), running various
workloads and operations against an RBD volume (in a cluster with no
other client activity); the counter readings were taken as sketched
below the list:
- Most RBD 'operations' (create, rm, feature disable/enable, map,
unmap) emit 'op_rw' and often 'op_w' too
- Program reads and writes against a mounted RBD volume *only* emit
'op_r' and 'op_w' (never 'op_rw'), regardless of whether they
read-modify-write existing file data, and regardless of whether the
writes are buffered, direct or sync
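
For reference, I was reading the counters on the relevant OSD host
like this (the OSD id is just an example; jq merely filters the JSON,
the raw 'perf dump' output works too):

  $ ceph daemon osd.0 perf dump | jq '.osd | {op_r, op_w, op_rw}'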
Is that correct? Or have I missed a program-driven workload that will
produce 'op_rw'? [1]
In our production clusters I'm seeing similar numbers of 'op_w' and
'op_rw' (for a given OSD), which would suggest a lot of RBD
operations if those are the only ones that cause 'op_rw' to be
incremented.
Cheers
Mark
[1] Tested using fio and pgbench (a database benchmark). I mounted
the volume using the kernel driver, roughly as sketched below (I'll
do some more experimentation using librbd).
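
The kernel-driver runs were of roughly this form (device, mount point
and file name are illustrative, not my exact setup):

  $ rbd map rbd/fio-test
  $ mkfs.xfs /dev/rbd0 && mount /dev/rbd0 /mnt/rbd0
  $ fio --name=randrw --ioengine=libaio --direct=1 --rw=randrw \
        --bs=4k --size=1g --iodepth=16 --filename=/mnt/rbd0/testfile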
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx