Hi,
not sure if this is exactly what you need, but if you know the pool id
(you probably should) you could try the following; it's from an Octopus
test cluster (assuming the warning was for the number of keys, not bytes):
$ ceph -f json pg dump pgs 2>/dev/null | jq -r '.pg_stats[] | select(.pgid | startswith("17.")) | .pgid + " " + "\(.stat_sum.num_omap_keys)"'
17.6 191
17.7 759
17.4 358
17.5 0
17.2 177
17.3 1
17.0 375
17.1 176
If you don't know the pool, you could drop the select filter and sort
the output by the second column to see which PG has the largest number
of omap keys, for example like this:
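(A sketch, untested here: the same command without the pool filter,
piped through coreutils sort/head; adjust to taste.)

$ ceph -f json pg dump pgs 2>/dev/null | jq -r '.pg_stats[] | .pgid + " " + "\(.stat_sum.num_omap_keys)"' \
    | sort -k2 -nr | head   # numeric, descending, by omap key count

The first line should then be the PG the warning refers to, and you
can deep-scrub it directly with 'ceph pg deep-scrub <pgid>'.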
Regards,
Eugen
Quoting Frank Schilder <frans@xxxxxx>:
Hi all,
we had a bunch of large omap object warnings after a user deleted a
lot of files on a ceph fs with snapshots. After the snapshots were
rotated out, all but one of these warnings disappeared over time.
However, one warning is stuck and I wonder if it's something else.
Is there a reasonable way (say, a one-liner with no more than 120
characters) to get ceph to tell me which PG this is coming from? I
just want to issue a deep scrub to check whether the warning
disappears. Going through the logs and querying every single object
for its key count seems a bit of a hassle for something that ought to
be part of "ceph health detail".
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx