On Mon, May 21, 2018 at 3:22 AM, Philip Poten <philip.poten@xxxxxxxxx> wrote:
> Hi,
>
> I managed to mess up the cache pool on an erasure coded cephfs:
>
> - I split pgs on the cache pool, and got a stray/unknown pg somehow
> - added a second cache pool in the hopes that I'll be allowed to remove
>   the first, broken one
> - and now have two broken/misconfigured cache pools and no working cephfs,
>   neither of which I'm allowed to remove
>
> I do not currently have the resources to set up a test cluster to try this
> out first, and more than one cephfs seems to be an experimental feature.
> But the data isn't crazy important, so:
>
> Can I delete a cephfs and recreate it with the contents intact just by
> using the same data/metadata pools as before?
>

Stop all ceph-mds daemons, including standby ones.

ceph fs rm xxx --yes-i-really-mean-it
ceph fs new xxx old_metadata old_data --force
ceph fs reset xxx --yes-i-really-mean-it

Start the ceph-mds daemons.

> My gut says, I'll be ok, but are there any gotchas?
>

If you remove the cache pool, some data may get lost.

> I could also resolve this by removing the cache/overlay tiers on that pool
> and add a newly created one, but this seems to be impossible/prohibited.
>
> I'd be very grateful for any pointers!
>
> cheers,
> Philip
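
For what it's worth, spelled out the whole sequence might look like the sketch below. It assumes a systemd-managed cluster and uses placeholder names (filesystem "cephfs", pools "cephfs_metadata"/"cephfs_data") -- substitute your own names and adjust the service units to however your MDS daemons are started.

# On every host that runs an MDS, active or standby (systemd unit assumed):
systemctl stop ceph-mds.target

# Remove only the filesystem definition; the pools and their objects remain.
ceph fs rm cephfs --yes-i-really-mean-it

# Recreate the filesystem on top of the same, already-populated pools.
ceph fs new cephfs cephfs_metadata cephfs_data --force

# Reset the filesystem map (per the steps above) before restarting the MDS daemons.
ceph fs reset cephfs --yes-i-really-mean-it

# Bring the MDS daemons back.
systemctl start ceph-mds.target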