Thanks for your reply. The command shows nothing; there are no PGs on the OSD.
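Since the OSD still reports ~193 GiB of data while holding zero PGs, a rough cross-check I can run is to ask bluestore itself what it thinks is allocated (just a sketch; the exact perf counter names are an assumption and the command has to run on the host of osd.158):

  # confirm no PGs map to this OSD
  ceph pg ls-by-osd 158
  # check what bluestore reports as allocated vs. stored (run on the OSD's host)
  ceph daemon osd.158 perf dump | grep -E 'bluestore_allocated|bluestore_stored'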
Best regards
On 17.11.23 23:09, Eugen Block wrote:
After you create the OSD, run 'ceph pg ls-by-osd {OSD}'. It should
show you which PGs are created there, and then you'll know which pool
they belong to. Then check the crush rule for that pool again. You can
paste the outputs here.
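For example (using osd.158 from your output; <pool> and <rule> are placeholders to fill in):

  ceph pg ls-by-osd 158
  ceph osd pool get <pool> crush_rule
  ceph osd crush rule dump <rule>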
Quoting Debian <debian@xxxxxxxxxx>:
Hi,
After a massive rebalance (tunables), my small SSD OSDs are getting
full. I changed my crush rules so there are actually no PGs/pools on
them, but the disks stay full:
ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
ID  CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP   META     AVAIL  %USE  VAR  PGS STATUS TYPE NAME
158 ssd   0.21999 1.00000  224 GiB 194 GiB 193 GiB 22 MiB 1002 MiB 30 GiB 86.68 1.49   0 up     osd.158
inferring bluefs devices from bluestore path
1 : device size 0x37e4400000 : own 0x[1ad3f00000~23c600000] = 0x23c600000 : using 0x39630000(918 MiB) : bluestore has 0x46e2d0000(18 GiB) available
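(For reference, the bluefs output above comes from something like the following; the OSD data path is specific to my setup:)

  ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-158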
When I recreate the OSD, it gets full again.

Any suggestions?

Thanks & best regards