I tried increasing the number of metadata replicas from 2 to 3 on my test cluster with the following command:

    ceph osd pool set metadata size 3

Afterwards, all of the metadata placement groups appear to switch to a degraded state and don't seem to be attempting to recover:

2013-01-08 14:49:37.352735 mon.0 [INF] pgmap v156393: 1920 pgs: 1280 active+clean, 640 active+degraded; 903 GB data, 1820 GB used, 2829 GB / 4650 GB avail; 1255/486359 degraded (0.258%)

Does anything need to be done after increasing the number of replicas?

Here's what the OSD tree looks like:

root@a1:~# ceph osd tree
dumped osdmap tree epoch 1303
# id    weight      type name           up/down  reweight
-1      4.99557     pool default
-3      4.99557         rack unknownrack
-2      2.49779             host b1
0       0.499557            osd.0       up       1
1       0.499557            osd.1       up       1
2       0.499557            osd.2       up       1
3       0.499557            osd.3       up       1
4       0.499557            osd.4       up       1
-4      2.49779             host b2
5       0.499557            osd.5       up       1
6       0.499557            osd.6       up       1
7       0.499557            osd.7       up       1
8       0.499557            osd.8       up       1
9       0.499557            osd.9       up       1

Thanks,
Bryan
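
P.S. In case more detail is useful, here is what I can pull with the standard ceph CLI (as far as I know these are the usual inspection commands; "metadata" is just the pool name from my setup):

    # confirm the pool's replica count actually changed to 3
    ceph osd pool get metadata size

    # list the unhealthy PGs and the reason they are flagged
    ceph health detail

    # query a specific degraded PG to see its acting set
    # (<pgid> is a placeholder for an actual PG id taken from 'ceph pg dump')
    ceph pg <pgid> query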
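
Since I only have two hosts, I'm also wondering whether the default CRUSH rule (which, as I understand it, places each replica on a separate host) simply can't find a third host for the new replica. This is how I'd decompile the CRUSH map to check the rule used by the metadata pool (the /tmp paths are just examples):

    # extract the cluster's CRUSH map and decompile it to text
    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt

    # then inspect the 'step chooseleaf' line in the metadata rule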