On 4 Jul 2012, at 19:59, Gregory Farnum wrote:

> That's odd — there isn't too much that went into the OSD between 0.47
> and 0.48 that I can think of, and most of that only impacts OSDs when
> they go through bootup. What does ceph -s display — are all the PGs
> healthy?
> -Greg

Hi Greg,

The PGs all seem to be healthy:

root@store1:~# ceph -s
   health HEALTH_OK
   monmap e1: 3 mons at {0=10.0.1.40:6789/0,1=10.0.1.41:6789/0,2=10.0.1.42:6789/0}, election epoch 40, quorum 0,1,2 0,1,2
   osdmap e342: 7 osds: 7 up, 7 in
   pgmap v5403: 1344 pgs: 1344 active+clean; 4620 MB data, 9617 MB used, 1368 GB / 1377 GB avail
   mdsmap e50: 0/0/1 up
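
In case it helps narrow things down beyond the summary line, the per-PG and per-OSD views can be pulled from the same CLI (both subcommands exist in Ceph of this era, though the exact output columns may vary by release):

root@store1:~# ceph pg dump     # full per-PG state, acting sets, and stats
root@store1:~# ceph osd tree    # OSD up/down/in status laid out by CRUSH hierarchy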