Are you using snapshots? We had a situation where we took a snapshot
of a very large RBD image, which led to near-full OSDs within a few
hours. Removing that snapshot got us healthy again.
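If snapshots turn out to be the culprit, here is a rough sketch of how
we checked and cleaned up; the pool, image, and snapshot names below
are placeholders, not taken from your cluster:

    # list snapshots on an RBD image and see how much space it uses
    rbd snap ls <pool>/<image>
    rbd du <pool>/<image>

    # remove a snapshot that is no longer needed
    rbd snap rm <pool>/<image>@<snapshot-name>

    # then watch usage drop back below the nearfull/full thresholds
    ceph osd df
    ceph health detail

Note that space freed by snapshot removal is reclaimed asynchronously
(snap trimming on the OSDs), so the numbers can take a while to fall.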
Quoting Joshua Schaeffer <jschaeffer@xxxxxxxxxxxxxxx>:
On 8/20/22 13:49, Anthony D'Atri wrote:
Tiny OSDs? PoC cluster?
It's a PoC of sorts. There is actual data, but it is for a very
small project on older hardware.
`ceph osd df`
user@ceph01:~$ sudo ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE      RAW USE  DATA     OMAP     META      AVAIL    %USE   VAR   PGS  STATUS
 1  hdd    0.09769         0       0 B      0 B      0 B      0 B       0 B      0 B      0     0     0  down
 2  hdd    0.09769         0       0 B      0 B      0 B      0 B       0 B      0 B      0     0     0  down
 3  hdd    0.09769         0       0 B      0 B      0 B      0 B       0 B      0 B      0     0     0  down
 4  hdd    0.09769         0       0 B      0 B      0 B      0 B       0 B      0 B      0     0     0  down
 5  hdd    0.09769   1.00000   100 GiB   15 GiB   14 GiB  857 KiB  1023 MiB   85 GiB  15.27  1.34    36  up
 6  hdd    0.09769   1.00000   100 GiB   12 GiB   11 GiB  680 KiB  1023 MiB   88 GiB  12.38  1.09    28  up
 7  hdd    0.09769   1.00000   100 GiB   12 GiB   11 GiB  2.4 MiB  1022 MiB   88 GiB  12.04  1.06    28  up
 8  hdd    0.09769         0       0 B      0 B      0 B      0 B       0 B      0 B      0     0     0  down
 9  hdd    0.09769   1.00000   100 GiB  4.5 GiB  3.5 GiB  672 KiB  1023 MiB   95 GiB   4.51  0.40     8  up
10  hdd    0.09769   1.00000   100 GiB   13 GiB   12 GiB  475 KiB  1024 MiB   87 GiB  12.70  1.12    29  up
11  hdd    0.09769   1.00000   100 GiB   14 GiB   13 GiB  620 KiB  1023 MiB   86 GiB  13.69  1.20    32  up
12  hdd    0.09769   1.00000   100 GiB   11 GiB  9.9 GiB  969 KiB  1023 MiB   89 GiB  10.86  0.95    24  up
13  hdd    0.09769   1.00000   100 GiB   12 GiB   11 GiB  455 KiB  1024 MiB   88 GiB  12.11  1.06    28  up
14  hdd    0.09769   1.00000   100 GiB   12 GiB   11 GiB  410 KiB  1024 MiB   88 GiB  12.45  1.09    28  up
15  hdd    0.09769   1.00000   100 GiB  7.8 GiB  6.8 GiB  1.5 MiB  1023 MiB   92 GiB   7.76  0.68    17  up
16  hdd    0.09769   1.00000       0 B      0 B      0 B      0 B       0 B      0 B      0     0     0  down
                        TOTAL  1000 GiB  114 GiB  104 GiB  8.9 MiB    10 GiB  886 GiB  11.38
MIN/MAX VAR: 0/1.34  STDDEV: 4.43
--
Thanks,
Joshua Schaeffer
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx