Hi Dan,

I'd like to decommission a node to reproduce the problem and post enough information for you (at least) to understand what is going on. Unfortunately I'm a Ceph newbie, so I'm not sure what info would be of interest before/during the drain. The crushmap is probably of interest. Pre-decommission (the interesting parts?):

root default {
	id -1		# do not change unnecessarily
	# weight 21.890
	alg straw
	hash 0	# rjenkins1
	item osd01 weight 2.700
	item osd03 weight 3.620
	item osd05 weight 1.350
	item osd06 weight 2.260
	item osd07 weight 2.710
	item osd08 weight 2.030
	item osd09 weight 1.800
	item osd02 weight 1.350
	item osd10 weight 4.070
}

# rules
rule data {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type host
	step emit
}

Should I gather anything else?

Chad.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com