Hi,
thanks for looking into this: our system disks are also wearing out too quickly!
Here are the numbers from our small cluster.
Best,
1) iotop results:
TID   PRIO  USER   DISK READ   DISK WRITE   SWAPIN IO       COMMAND
6426  be/4  ceph     0.00 B    590.00 K     ?unavailable?   ceph-mon -f --cluster ceph --id hpc1a --setuser ceph --setgroup ceph [log]
6813  be/4  ceph   275.49 M    275.93 M     ?unavailable?   ceph-mon -f --cluster ceph --id hpc1a --setuser ceph --setgroup ceph [rocksdb:low0]
6814  be/4  ceph   145.00 K     30.93 M     ?unavailable?   ceph-mon -f --cluster ceph --id hpc1a --setuser ceph --setgroup ceph [rocksdb:high0]
7087  be/4  ceph    25.87 M     14.54 M     ?unavailable?   ceph-mon -f --cluster ceph --id hpc1a --setuser ceph --setgroup ceph [fn_monstore]
7094  be/4  ceph    12.02 M      7.46 M     ?unavailable?   ceph-mon -f --cluster ceph --id hpc1a --setuser ceph --setgroup ceph [safe_timer]
7099  be/4  ceph    33.00 K     74.00 K     ?unavailable?   ceph-mon -f --cluster ceph --id hpc1a --setuser ceph --setgroup ceph [ms_dispatch]
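(In case you want to collect comparable numbers on your side: something along these lines gives accumulated per-thread I/O; the interval and flags here are just an example sketch, not my exact invocation.)

  # batch mode, accumulated totals, only threads actually doing I/O;
  # the 600 s interval is an arbitrary example, adjust as needed
  iotop -b -a -o -d 600 -n 2 | grep ceph-mon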
2) manual compaction count (taken Fri 13 Oct 2023 11:26:07 AM CEST):
523
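(Sketch of how a comparable count can be pulled; the log path and the message text are assumptions on my part, and it relies on the rocksdb compaction messages being mirrored into the mon log, so adjust for your setup.)

  # count manual-compaction entries in the mon log (path and message
  # text are assumptions)
  grep -ci 'manual compaction' /var/log/ceph/ceph-mon.hpc1a.log

  # a compaction can also be triggered by hand if you want to test:
  ceph tell mon.hpc1a compact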
3) monitor store.db size:
8.0M /var/lib/ceph/mon/ceph-hpc1a/store.db/
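(Reproducible with plain du, e.g.:)

  du -sh /var/lib/ceph/mon/ceph-hpc1a/store.db/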
4) cluster version and status:
ceph version 16.2.13 (b81a1d7f978c8d41cf452da7af14e190542d2ee2) pacific (stable)
  cluster:
    id:     b351decf-4168-45ec-b8de-372051cf634a
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum hpc1a,hpc1b,hpc2c,hpcg2,hpc2d (age 7d)
    mgr: hpc1a(active, since 9d), standbys: hpc2c, hpcg2
    mds: 9/9 daemons up, 4 standby
    osd: 13 osds: 13 up (since 11d), 13 in (since 7w)

  data:
    volumes: 9/9 healthy
    pools:   21 pools, 475 pgs
    objects: 27.53M objects, 13 TiB
    usage:   34 TiB used, 24 TiB / 57 TiB avail
    pgs:     475 active+clean

  io:
    client: 4.9 MiB/s rd, 61 MiB/s wr, 12 op/s rd, 993 op/s wr
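(These are just the standard CLI views; to pull the same on your cluster:)

  ceph --version   # or 'ceph versions' for the per-daemon view
  ceph -s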