Thanks Chad. It seems to be working.

—Jiten

On Nov 11, 2014, at 12:47 PM, Chad Seys <cwseys@xxxxxxxxxxxxxxxx> wrote:

> Find out which OSD it is:
>
> ceph health detail
>
> Squeeze blocks off the affected OSD:
>
> ceph osd reweight OSDNUM 0.8
>
> Repeat with any OSD which becomes toofull.
>
> Your cluster is only about 50% used, so I think this will be enough.
>
> Then when it finishes, allow data back on OSD:
>
> ceph osd reweight OSDNUM 1
>
> Hopefully ceph will someday be taught to move PGs in a better order!
> Chad.
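
In case anyone wants to script that procedure later, here is a rough sketch of the same reweight loop. It assumes 'ceph health detail' flags the affected OSDs with lines containing "full at" (e.g. "osd.12 is near full at 86%"); check the exact wording on your release before trusting the grep. OSDNUM and the 0.8 weight are just the placeholders from Chad's mail.

# squeeze data off every OSD that 'ceph health detail' reports as (near) full
for osd in $(ceph health detail | awk '/full at/ {print $1}' | sed 's/^osd\.//'); do
    echo "reweighting osd.${osd} to 0.8"
    ceph osd reweight "${osd}" 0.8
done

# watch 'ceph -w' and repeat for any OSD that becomes toofull while data
# moves; once the cluster is back to HEALTH_OK, restore each weight:
#   ceph osd reweight OSDNUM 1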