On 10/24/19 6:54 PM, Thomas Schneider wrote:
This is understood. I needed to start reweighting specific OSDs because rebalancing was not working and I got a warning in Ceph that some OSDs were running out of space.
Still, your main issue is that your buckets are uneven: 350TB vs 79TB, more than 4 times.
I suggest you disable multiroot (use only the default root) and use your 1.6TB drives from the current default root (I count ~48 1.6TB OSDs).
Also, mix your OSDs across hosts so data is more evenly distributed in the cluster - this is one of the basic Ceph best practices.
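To see how uneven the current distribution is before and after any moves, the standard read-only inspection commands are (safe to run at any time):

```shell
# Show the CRUSH hierarchy with per-bucket weights, so uneven
# roots and hosts stand out.
ceph osd crush tree

# Show per-OSD utilization (%USE) and PG counts grouped by the
# CRUSH tree - the quickest way to spot nearly full OSDs.
ceph osd df tree
```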
You can also try the offline upmap method; some folks get better results with it (don't forget to disable the balancer first):
```shell
ceph osd getmap -o om
osdmaptool om --upmap upmap.sh --upmap-deviation 0
bash upmap.sh
rm -f upmap.sh om
```
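The "disable balancer" step uses the mgr balancer module; a minimal sketch (assumes Luminous or later with the balancer module available):

```shell
# Turn off the automatic balancer so it does not fight the
# pg-upmap entries applied by the offline method.
ceph balancer off
ceph balancer status    # confirm it is inactive

# ... run the offline upmap commands and wait for recovery ...

# Optionally re-enable the balancer afterwards, in upmap mode.
ceph balancer mode upmap
ceph balancer on
```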
k
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx