In my cluster every OSD has 55 GB (DB/WAL and data on the same device), and `ceph -v` reports 14.2.5. Can anyone give me an idea of how to fix this?

WeiGuo Ren <rwg1335252904@xxxxxxxxx> wrote on Mon, Apr 24, 2023 at 19:49:
>
> I have two OSDs, used by the RGW index pool. After heavy stress
> testing, these two OSDs filled up to 99.90%. Why did the full ratio
> (95%) not take effect? I don't understand this well: could it be that
> when an OSD fills up with omap data, it is not limited by the full
> ratio? I also tried to expand the OSD with ceph-bluestore-tool after
> adding a partition, but the expansion failed and I don't know why.
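For reference, a minimal sketch of the checks and the expansion procedure involved here, assuming a grown underlying partition and an example OSD id of 0 (the id and path are illustrative, not taken from the original message):

```shell
# Show the cluster-wide ratios; mon_osd_full_ratio defaults to 0.95.
ceph osd dump | grep ratio

# In an emergency the full ratio can be raised temporarily to regain
# write access while you free space or expand the OSDs:
ceph osd set-full-ratio 0.97

# To expand an OSD after growing its underlying device/partition
# (stop the daemon first, then let BlueFS pick up the new size):
systemctl stop ceph-osd@0
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
systemctl start ceph-osd@0
```

Note that the full-ratio check is based on the usage the OSD reports periodically, so a fast burst of omap/RocksDB writes on an RGW index pool can overshoot the 95% threshold before the mons react.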