rbd iostat requires pool specified

Hoping this is trivial and someone can point me in the right direction: I typically keep a background screen session running `rbd perf image iostat`, which shows all of the RBD images with I/O and how busy each one is at any given moment.
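For reference, the whole setup is just the bare command left running detached (GNU screen here, but anything similar works; the session name is arbitrary):

    # keep the merged iostat view running in a detached screen session
    # ("rbd-iostat" is just an example session name)
    screen -dmS rbd-iostat rbd perf image iostat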

Recently, after upgrading everything to the latest Octopus release (15.2.16), the command no longer allows omitting the pool, which means I can’t blend all of my rbd pools together into a single view.
How it used to appear:
> NAME                                WR    RD    WR_BYTES    RD_BYTES      WR_LAT    RD_LAT
> rbd-ssd/app1                     322/s   0/s   5.6 MiB/s       0 B/s     2.28 ms   0.00 ns
> rbd-ssd/app2                     223/s   5/s   2.1 MiB/s   147 KiB/s     3.56 ms   1.12 ms
> rbd-hybrid/app3                   76/s   0/s    11 MiB/s       0 B/s    16.61 ms   0.00 ns
> rbd-hybrid/app4                   11/s   0/s   395 KiB/s       0 B/s    51.29 ms   0.00 ns
> rbd-hybrid/app5                    3/s   0/s    74 KiB/s       0 B/s   151.54 ms   0.00 ns
> rbd-hybrid/app6                    0/s   0/s    42 KiB/s       0 B/s    13.90 ms   0.00 ns
> rbd-hybrid/app7                    0/s   0/s   2.4 KiB/s       0 B/s     1.70 ms   0.00 ns
> 
> NAME                                WR    RD    WR_BYTES   RD_BYTES     WR_LAT      RD_LAT
> rbd-ssd/app1                     483/s   0/s   7.3 MiB/s      0 B/s    2.17 ms     0.00 ns
> rbd-ssd/app2                     279/s   5/s   2.5 MiB/s   69 KiB/s    3.82 ms   516.30 us
> rbd-hybrid/app3                  147/s   0/s    10 MiB/s      0 B/s    8.59 ms     0.00 ns
> rbd-hybrid/app6                   10/s   0/s   425 KiB/s      0 B/s   75.79 ms     0.00 ns
> rbd-hybrid/app8                    0/s   0/s   2.4 KiB/s      0 B/s    1.85 ms     0.00 ns


> $ uname -r && rbd --version && rbd perf image iostat
> 5.4.0-107-generic
> ceph version 15.2.16 (d46a73d6d0a67a79558054a3a5a72cb561724974) octopus (stable)
> rbd: mgr command failed: (2) No such file or directory: [errno 2] RADOS object not found (Pool 'rbd' not found)

This is Ubuntu 20.04, using packages rather than cephadm.
I do not have a pool named `rbd`, so the error itself is accurate (presumably the command now falls back to a literal default pool named `rbd` when none is given), but I do have a handful of pools with the rbd application set.

> $ for pool in rbd-{ssd,hybrid,ec82} ; do ceph osd pool application get $pool ; done
> {
>     "rbd": {}
> }
> {
>     "rbd": {}
> }
> {
>     "rbd": {}
> }
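For what it’s worth, rather than hard-coding the pool names, they can be enumerated generically; a minimal sketch, assuming `ceph osd pool application get <pool> rbd` exits non-zero when the application isn’t enabled on a pool (which matches the behavior here):

    # print every pool that has the "rbd" application enabled
    for pool in $(ceph osd pool ls); do
        ceph osd pool application get "$pool" rbd >/dev/null 2>&1 && echo "$pool"
    done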

Looking at the help output, nothing suggests that `<pool-spec>` is optional, and the command won’t accept wildcard globs like `rbd*` for the pool name.

> $ rbd help perf image iostat
> usage: rbd perf image iostat [--pool <pool>] [--namespace <namespace>]
>                              [--iterations <iterations>] [--sort-by <sort-by>]
>                              [--format <format>] [--pretty-format]
>                              <pool-spec>
> 
> Display image IO statistics.
> 
> Positional arguments
>   <pool-spec>                pool specification
>                              (example: <pool-name>[/<namespace>]
> 
> Optional arguments
>   -p [ --pool ] arg          pool name
>   --namespace arg            namespace name
>   --iterations arg           iterations of metric collection [> 0]
>   --sort-by arg (=write_ops) sort-by IO metric (write-ops, read-ops,
>                              write-bytes, read-bytes, write-latency,
>                              read-latency) [default: write-ops]
>   --format arg               output format (plain, json, or xml) [default:
>                              plain]
>   --pretty-format            pretty formatting (json and xml)

Setting the pool name to one of my rbd pools, either as the pool-spec or via -p/--pool, works, but obviously only for that one pool, and not for *all* rbd pools as it functioned previously (on what appears to have been 15.2.13); the per-pool workaround I’ve settled on for now is sketched below.
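The closest I can get to the old merged view is one iostat per pool, each in its own window; a rough sketch (again assuming GNU screen, with my pool names hard-coded):

    # interim workaround: one `rbd perf image iostat` per pool,
    # each in its own detached screen session
    for pool in rbd-ssd rbd-hybrid rbd-ec82; do
        screen -dmS "iostat-$pool" rbd perf image iostat "$pool"
    done

For scripted merging, running each pool with `--format json --iterations 1` and concatenating the results may also work, but that still isn’t the single live view the bare command used to give.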
I didn’t see a PR mentioned in the 15.2.14-16 release notes that seemed to touch rbd in a way that would affect this, but I could have glossed over something.
Appreciate any pointers.

Thanks,
Reed



