On Jan 15, 2014, Sage Weil <sage@xxxxxxxxxxx> wrote:

> v0.75
>  291 files changed, 82713 insertions(+), 33495 deletions(-)
>
> Upgrading
> ~~~~~~~~~

I suggest adding:

* All (replicated?) pools will likely fail scrubbing, because the
  per-pool dirty object counts introduced in 0.75 won't match.  This
  inconsistency is cleared by a pg repair; unfortunately that is about
  as expensive as a deep-scrub, and it is not automatically scheduled
  or retried the way scrubs and deep-scrubs are.

I suppose that once the dirty counts are brought back into sync, the
next scrub won't find inconsistent counts again, but I haven't got to
that point yet.

What surprised me was the huge number of objects marked as dirty!  It
was at least 14k out of 70k objects in each data pool, and even more in
the metadata pools, but it is not as if I have messed with that many
objects recently.  Could something be amiss there?

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Toolchain Engineer