On Thu, Jan 31, 2013 at 10:56 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
> On Thu, Jan 31, 2013 at 10:50 AM, Andrey Korolyov <andrey@xxxxxxx> wrote:
>> http://xdel.ru/downloads/ceph-log/rados-out.txt.gz
>>
>>
>> On Thu, Jan 31, 2013 at 10:31 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
>>> Can you pastebin the output of "rados -p rbd ls"?
>
>
> Well, that sure is a lot of rbd objects. Looks like a tool mismatch or
> a bug in whatever version you were using. Can you describe how you got
> into this state, what versions of the servers and client tools you
> used, etc?
> -Greg

That's relatively fresh data that was moved into a brand-new cluster a couple of days after the 0.56.1 release, and the tool and daemon versions were kept consistent the whole time. All of the garbage data belongs to the same pool prefix (3.), the pool where I recently put a bunch of VM images. The cluster may have experienced short split-brain periods during crash tests with no workload at all, and during standard crash tests of OSD removal/re-addition under moderate workload. Killed OSDs were brought back before, during, and after the data rearrangement triggered by the "osd down" timeout.

Is it possible to do a little cleanup somehow without re-creating the pool?
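For reference, here is a rough sketch of the kind of cleanup I have in mind, just listing the pool and removing objects that match a name prefix with the plain rados tool (the 'rb.0.1' prefix below is only an illustration, not the actual prefix from my listing):

    # list objects in the pool and remove those matching a given prefix
    rados -p rbd ls | grep '^rb\.0\.1' | while read obj; do
        rados -p rbd rm "$obj"
    done

I am not sure whether removing rbd data objects directly like this is safe while any image in the pool is still mapped or in use, so treat it as a sketch rather than something I have actually run against this cluster.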