Re: Options for RADOS client-side write latency monitoring

Hello,

In my opinion, the best approach is to deploy a batch fio pod (with a PVC
volume backed by your Rook Ceph cluster) in your Kubernetes cluster.
The I/O profile depends on your workload, but 8 KB (the PostgreSQL default
block size) random read/write plus a sequential pass is a good starting point.
That way you will be as close as possible to the client side.
Export the results as JSON and just graph them.
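
As a minimal sketch (the job name, the /data mount path for the PVC, and the
70/30 read/write mix are placeholders to adapt to your workload):

    # 8 KB random read/write against a file on the PVC-backed mount,
    # with per-operation latency statistics exported as JSON.
    fio --name=pg-sim --filename=/data/fio.test --size=1g \
        --rw=randrw --rwmixread=70 --bs=8k \
        --ioengine=libaio --direct=1 --iodepth=4 \
        --time_based --runtime=60 \
        --output-format=json --output=result.json

The latency statistics in result.json (e.g. the clat_ns mean and percentiles)
can then be scraped into whatever graphing stack you already run.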

Regards,

Stéphane


On Wed, 18 May 2022 at 00:01, <jules@xxxxxx> wrote:

> Greetings, all.
>
> I'm attempting to introduce client-side RADOS write latency monitoring on a
> (rook) Ceph cluster. The use case is a mixture of containers serving file and
> database workloads (although my question may apply more broadly).
>
> The aim here is to measure the average write latency as observed by a client,
> rather than relying entirely on the metrics reported by the OSDs
> (i.e., ceph_osd_commit_latency_ms and ceph_osd_apply_latency_ms).
>
> So far, I’ve experimented with `rados bench`, driving some basic write
> latency monitoring from a shell script.
>
> The parameters I’m using:
> •    Single thread
> •    64KB block size
> •    10 seconds to benchmark
>
> Essentially, the script parses output (average latency) from the following:
>
>     rados bench --pool=xxx 10 write -t 1 -b 65536
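>
> Roughly, the parsing amounts to something like the following (a sketch; the
> exact label of the average-latency line may vary between Ceph releases):
>
>     # Run a 10 s single-threaded 64 KB write benchmark and pull out the
>     # "Average Latency(s)" value (in seconds) from the summary output.
>     rados bench --pool=xxx 10 write -t 1 -b 65536 \
>         | awk '/^Average Latency/ {print $3}'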
>
> Questions:
>
> 1. Are the parameters outlined above optimal for this kind of performance
> monitoring (for example, would it be better to use a block size of 4KB, or
> even 1KB)?
>
> 2. Is there a better approach here (for example, using a ceph-manager
> plugin or other more standard approach)?
>
> Thanks!
>
> Best regards,
>
> Jules
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



