On 05/17/2012 03:59 AM, Karol Jurak wrote:
> How serious is such a situation? Do the OSDs know how to handle it
> correctly? Or could this result in some data loss or corruption?
> After the recovery finished (ceph -w showed that all PGs were in the
> active+clean state), I noticed that a few rbd images were corrupted.
As Sage mentioned, the OSDs know how to handle full journals correctly. I'd like to figure out how your rbd images got corrupted, if possible.

How did you notice the corruption? Has your cluster always run 0.46, or did you upgrade from an earlier version? What happened to the cluster between your last check for corruption and now? Did your use of it, or any Ceph client or server configuration, change?
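For anyone else trying to answer the "how did you notice" question, one way to detect this kind of corruption is to export each image and compare its checksum against a value recorded before the incident. This is only a minimal sketch: the pool name is a placeholder, it assumes you have prior checksums to compare against, and the images should not be in active use while exporting.

    #!/bin/sh
    # Hypothetical integrity check for rbd images in a pool.
    # "rbd" here is a placeholder pool name; adjust to your setup.
    POOL=rbd
    for IMG in $(rbd ls "$POOL"); do
        # Export the image to a temporary file and hash its contents.
        rbd export "$POOL/$IMG" "/tmp/$IMG.img"
        sha256sum "/tmp/$IMG.img"
    done

Comparing these sums against ones taken while the cluster was known to be healthy narrows down when the corruption appeared, which is exactly the timeline the questions above are trying to establish.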