Hi,
we have a cluster running Ceph Luminous 12.2.12, serving RADOS Gateway (S3) only.
The data pool is placed on SAS HDDs (1430 of them) and the remaining pools are placed on SSDs (72 of them). There are 72 hosts with the OSD role (3 rows, 2 racks per row, and 12 hosts per rack). BlueStore, of course.
The question is: how many PGs do we need for default.rgw.meta? Any ideas?
Example statistics from this cluster:
pool default.rgw.buckets.data id 15
189946/9416348469 objects misplaced (0.002%)
recovery io 13.5MiB/s, 83objects/s
client io 531MiB/s rd, 51.4MiB/s wr, 13.24kop/s rd, 4.84kop/s wr
pool .rgw.root id 16
nothing is going on
pool default.rgw.control id 17
nothing is going on
pool default.rgw.meta id 18
client io 47.0MiB/s rd, 0B/s wr, 57.95kop/s rd, 450op/s wr
pool default.rgw.log id 19
nothing is going on
pool default.rgw.buckets.index id 20
client io 3.12MiB/s rd, 0B/s wr, 3.19kop/s rd, 1.92kop/s wr
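For reference, here is the rough arithmetic I would apply, following the usual "target PGs per OSD" rule of thumb (the formula behind the Ceph pgcalc tool). Apart from the 72 SSD OSDs mentioned above, every number in this sketch is an assumption (replica size 3, a target of ~100 PGs per OSD, and a 10% share of the SSD PG budget for default.rgw.meta), so it is only an illustration, not a recommendation:

import math

# Rough PG sizing sketch for default.rgw.meta, using the common
# "target PGs per OSD" rule of thumb (as in the Ceph pgcalc tool).
# Everything except the 72 SSD OSDs from the cluster description is an assumption.

def suggested_pg_num(osd_count, target_pgs_per_osd, data_share, pool_size):
    """(OSDs * target PGs per OSD * share of data) / replica size,
    rounded to the nearest power of two."""
    raw = osd_count * target_pgs_per_osd * data_share / pool_size
    return 2 ** max(0, round(math.log2(raw)))

ssd_osds = 72              # SSD OSDs carrying all non-data pools (given above)
pool_size = 3              # assumption: replicated pool, size 3
target_pgs_per_osd = 100   # assumption: aim for ~100 PGs per OSD overall
meta_share = 0.10          # assumption: default.rgw.meta gets ~10% of the SSD PG budget

print(suggested_pg_num(ssd_osds, target_pgs_per_osd, meta_share, pool_size))
# 72 * 100 * 0.10 / 3 = 240 -> nearest power of two is 256

Whether something like 256 is sensible obviously depends on how the PG budget of those 72 SSD OSDs is split between default.rgw.meta and the other SSD pools (index, log, control, .rgw.root).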
Regards,
Jarek