How many OSDs are nearfull?
Keep in mind that ceph osd reweight is temporary. If you mark an OSD OUT and then IN again, the weight will be reset to 1.0. If you need something that's persistent, use ceph osd crush reweight osd.NUM <crush_weight> instead. Look at ceph osd tree to see the current weights.
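For example (a rough sketch; osd.12 and the 1.82 weight are placeholders - use your own OSD id and, conventionally, the drive's capacity in TiB for the CRUSH weight):

  ceph osd tree                          # shows both the CRUSH WEIGHT and the REWEIGHT columns
  ceph osd reweight 12 0.8               # temporary override in the range 0-1, reset when the OSD goes OUT then IN
  ceph osd crush reweight osd.12 1.82    # persistent CRUSH weight, survives OUT/IN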
On Tue, Nov 11, 2014 at 12:47 PM, Chad Seys <cwseys@xxxxxxxxxxxxxxxx> wrote:
Find out which OSD it is:
ceph health detail
Squeeze PGs off the affected OSD:
ceph osd reweight OSDNUM 0.8
Repeat with any OSD which becomes toofull.
Your cluster is only about 50% used, so I think this will be enough.
Then, when it finishes, allow data back onto the OSD:
ceph osd reweight OSDNUM 1
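For reference, the whole sequence in one place (a sketch; OSDNUM stands for whichever OSD id ceph health detail flags, and 0.8 is just a starting point - lower it further if that OSD stays nearfull):

  ceph health detail             # lists the nearfull / toofull OSDs
  ceph osd reweight OSDNUM 0.8   # temporarily squeeze PGs off the full OSD
  ceph -s                        # watch until backfill/recovery finishes
  ceph osd reweight OSDNUM 1     # restore the weight once the cluster is healthy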
Hopefully ceph will someday be taught to move PGs in a better order!
Chad.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com