Re: Does cephfs subvolume have commands similar to `rbd perf` to query iops, bandwidth, and latency of rbd image?

On Tue, Feb 14, 2023 at 12:05 AM 郑亮 <zhengliang0901@xxxxxxxxx> wrote:
>
> Hi all,
>
> Does a CephFS subvolume have commands similar to `rbd perf` for querying
> the IOPS, bandwidth, and latency of an RBD image? `ceph fs perf stats`
> shows client-side metrics, not metrics for the CephFS subvolume. What I
> want are metrics at the subvolume level, like the output below.

You'd need to mount the subvolume to get stats for it via the `ceph fs
perf stats` command you mention and/or (preferably) using the
cephfs-top[0] tool.

[0]: https://docs.ceph.com/en/latest/cephfs/cephfs-top/
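
For example, the workflow looks roughly like this (a minimal sketch: the
filesystem name, subvolume group, monitor address, and client name are
placeholders, so adjust them to your cluster and CSI setup):

  # Resolve the subvolume's path within the filesystem (ceph-csi normally
  # places its subvolumes in the "csi" group):
  ceph fs subvolume getpath <fs_name> csi-vol-5c2d22e3-9195-11ed-aab9-a6732a88c7dd --group_name csi

  # Mount that path with the kernel client (auth options depend on your setup):
  mount -t ceph <mon_host>:<subvolume_path> /mnt/subvol -o name=<client_name>

  # Make sure the stats mgr module is enabled, then query per-client metrics
  # or watch them interactively:
  ceph mgr module enable stats
  ceph fs perf stats
  cephfs-top

The metrics are still reported per client mount rather than per subvolume,
but with one mount per subvolume that effectively gives you subvolume-level
numbers.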

>
> [root@smd-exporter-5f87dcb946-wt7tl /]# rbd perf image iostat
> rbd: waiting for initial image stats
>
> NAME                                                 WR   RD   WR_BYTES  RD_BYTES    WR_LAT   RD_LAT
> pool/csi-vol-5c2d22e3-9195-11ed-aab9-a6732a88c7dd   8/s  0/s  215 KiB/s     0 B/s  14.62 ms  0.00 ns
> pool/csi-vol-10e31971-9196-11ed-aab9-a6732a88c7dd   5/s  0/s   79 KiB/s     0 B/s  13.18 ms  0.00 ns
> pool/csi-vol-0f6e0dbe-9196-11ed-aab9-a6732a88c7dd   1/s  0/s   23 KiB/s     0 B/s  20.60 ms  0.00 ns
> pool/csi-vol-88704e7a-919c-11ed-aab9-a6732a88c7dd   1/s  0/s   12 KiB/s     0 B/s  12.38 ms  0.00 ns
> pool/csi-vol-a87c21e9-919c-11ed-aab9-a6732a88c7dd   1/s  0/s   14 KiB/s     0 B/s  13.21 ms  0.00 ns
> pool/csi-vol-b6d040a2-919c-11ed-aab9-a6732a88c7dd   0/s  0/s    8 KiB/s     0 B/s   9.88 ms  0.00 ns
> pool/csi-vol-efb7c236-6fc9-11ed-aab9-a6732a88c7dd   0/s  0/s  3.2 KiB/s     0 B/s   2.96 ms  0.00 ns
>
> NAME                                                 WR   RD   WR_BYTES  RD_BYTES    WR_LAT   RD_LAT
> pool/csi-vol-5c2d22e3-9195-11ed-aab9-a6732a88c7dd  12/s  0/s  165 KiB/s     0 B/s  26.07 ms  0.00 ns
> pool/csi-vol-10e31971-9196-11ed-aab9-a6732a88c7dd   4/s  0/s   91 KiB/s     0 B/s   9.83 ms  0.00 ns
> pool/csi-vol-b6d040a2-919c-11ed-aab9-a6732a88c7dd   1/s  0/s   22 KiB/s     0 B/s  15.25 ms  0.00 ns
> pool/csi-vol-efb7c236-6fc9-11ed-aab9-a6732a88c7dd   1/s  0/s   38 KiB/s     0 B/s  20.40 ms  0.00 ns
> pool/csi-vol-88704e7a-919c-11ed-aab9-a6732a88c7dd   0/s  0/s  9.6 KiB/s     0 B/s  12.96 ms  0.00 ns
> pool/csi-vol-a87c21e9-919c-11ed-aab9-a6732a88c7dd   0/s  0/s   10 KiB/s     0 B/s   3.99 ms  0.00 ns
> pool/csi-vol-0f6e0dbe-9196-11ed-aab9-a6732a88c7dd   0/s  0/s  2.4 KiB/s     0 B/s   7.33 ms  0.00 ns
>
> Best Regards,
> Liang Zheng
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>


-- 
Cheers,
Venky
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



