Re: Deleting a CephFS volume

Hi Patrick,

On 5/22/23 22:00, Patrick Donnelly wrote:
Hi Conrad,

On Wed, May 17, 2023 at 2:41 PM Conrad Hoffmann <ch@xxxxxxxxxxxxx> wrote:

On 5/17/23 18:07, Stefan Kooman wrote:
On 5/17/23 17:29, Conrad Hoffmann wrote:
Hi all,

I'm having difficulties removing a CephFS volume that I set up for
testing. I've been through this with RBDs, so I do know about
`mon_allow_pool_delete`. However, it doesn't help in this case.

It is a cluster with 3 monitors. You can find a console log of me
verifying that `mon_allow_pool_delete` is indeed true on all monitors
but still failing to remove the volume here:

That's not just a volume, that's the whole filesystem. If that's what
you want to do ... I see the MDS daemon is still up. IIRC there should
be no MDS running if you want to delete the fs. Can you stop the MDS
daemon and try again?
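
A minimal sketch of one way to do that, assuming a hypothetical
filesystem name "testfs":

# Mark the filesystem as down and fail its MDS ranks, so no MDS is
# serving it anymore (the daemons themselves drop back to standby)
ceph fs fail testfs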

That sort of got me in the right direction, but I am still confused. I
don't think I understand the difference between a volume and a
filesystem. I think I followed [1] when I set this up. It says to use
`ceph fs volume create`. I went ahead and ran it again, and it certainly
creates something that shows up in both `ceph fs ls` and `ceph fs volume
ls`. Also, [2] says "FS volumes, an abstraction for CephFS file
systems", so I guess they are the same thing?

Yes.
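
For illustration, a quick sketch with a hypothetical volume name
"testfs": `ceph fs volume create` creates a file system together with
its data and metadata pools and, with an orchestrator, typically
deploys MDS daemons for it, so the same entity shows up in both
listings.

ceph fs volume create testfs
# Both commands list the newly created file system / volume
ceph fs ls
ceph fs volume ls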

At any rate, shutting down the MDS did _not_ help with `ceph fs volume
rm` (it failed with the same error message), but it _did_ help with
`ceph fs rm`, which then worked. Hard to make sense of, but I am fairly
sure the error message I was seeing is nonsensical in that context.
Under what circumstances will `ceph fs volume rm` even work, if it fails
to delete a volume I just created?

`fs rm` just removes the file system from the monitor maps. You still
have the data pools lying around which is what the `volume rm` command
is complaining about.
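
To illustrate, a sketch of what is typically left behind and how it
could be cleaned up by hand. The pool names below assume the defaults
that `ceph fs volume create` picks for a hypothetical volume called
"testfs"; check `ceph osd pool ls` for the actual names.

# Pools created for the volume are typically named cephfs.<volume>.meta
# and cephfs.<volume>.data
ceph osd pool ls

# Deleting a pool requires mon_allow_pool_delete=true, the pool name
# given twice, and the confirmation flag
ceph osd pool rm cephfs.testfs.data cephfs.testfs.data --yes-i-really-really-mean-it
ceph osd pool rm cephfs.testfs.meta cephfs.testfs.meta --yes-i-really-really-mean-it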

Try:

ceph config set global mon_allow_pool_delete true
ceph fs volume rm ...
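
Spelled out with a hypothetical volume name "testfs" (the confirmation
flag is required before the command will delete anything):

ceph config set global mon_allow_pool_delete true
ceph fs volume rm testfs --yes-i-really-mean-it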

Thanks a lot for the clarifications. It's starting to make sense now. I still cannot quite explain the behavior I observed earlier, but I also cannot reproduce it anymore, so there must have been something else amiss. I can confirm that I can now properly delete an FS including the data and metadata pools with `ceph fs volume rm` (even if the MDS is up). The only caveat is that

ceph config set global mon_allow_pool_delete true

does not work for me; I had to use:

ceph tell mon.* injectargs --mon_allow_pool_delete true
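
In case it helps someone else hitting the same thing, a sketch of how
to compare the two: `ceph config set` writes to the monitors' central
configuration database, while `injectargs` changes the value in the
running daemons at runtime ("mon.a" below is a placeholder for a real
monitor id).

# Value stored in the central configuration database
ceph config get mon mon_allow_pool_delete
# Value currently in effect on a specific running monitor
ceph config show mon.a mon_allow_pool_delete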

It also seems that `ceph fs volume rm` may have already deleted the FS by the time it fails to remove the storage pools, which makes sense, but it may have contributed to my confusion earlier, as I didn't quite understand what it was trying to achieve.

Thanks again for the help!

Conrad
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



