Do I need to be worried about this?

2011-01-23 23:12:06.328866 log 2011-01-23 23:12:05.316993 osd1 192.168.1.11:6801/9447 45 : [ERR] 1.1 scrub osd0 missing 10000017737.00000000/head
2011-01-23 23:12:06.328866 log 2011-01-23 23:12:05.317429 osd1 192.168.1.11:6801/9447 46 : [ERR] 1.1 scrub stat mismatch, got 7/136 objects, 0/0 clones, 12356/8682277 bytes, 17/8550 kb.
2011-01-23 23:12:08.230768 pg v129643: 270 pgs: 262 active+clean, 8 active+clean+inconsistent; 877 GB data, 1707 GB used, 1320 GB / 3036 GB avail

I would expect ceph to fix the inconsistent PGs at this point, but it just continues background scrubbing. Does inconsistent data get cleaned up automatically from other replicas, or is there something that I need to fix manually here?

--Ravi
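
P.S. In case manual intervention is the expected path, this is roughly what I was planning to try. I'm guessing at the exact commands here (I haven't run them yet, and the PG id 1.1 is just taken from the scrub errors above), so please correct me if repair works differently:

    # list PGs currently flagged inconsistent
    ceph pg dump | grep inconsistent

    # ask the primary OSD to repair PG 1.1 from the other replica
    ceph pg repair 1.1

Is that the right approach, or is it unsafe when the primary copy is the one that is missing the object?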