Re: ceph-volume lvm deactivate/destroy/zap

I would just like to mirror Dan van der Ster’s sentiments.

As someone attempting to move an OSD to bluestore, with limited/no LVM experience, it is a completely different beast and complexity level compared to the ceph-disk/filestore days.

ceph-deploy was a very simple tool that did exactly what I was looking to do, but now ceph-disk has been deprecated halfway into a release, and ceph-deploy doesn’t appear to fully support ceph-volume, which is now the official way to manage OSDs going forward.

My ceph-volume create command ‘succeeded’, but the OSD doesn’t start, so now I am trying to zap the disk to recreate the OSD, and the zap is failing just as Dan’s did.

And yes, I was able to get it zapped using the lvremove, vgremove, pvremove commands, but that is not obvious to someone who hasn’t used LVM extensively for storage management before.

I also want to mirror Dan’s sentiments about the unnecessary complexity imposed on what I expect is the default use case: an entire disk being used. I can’t see anything other than the ‘entire disk’ method being the most common use case for Ceph users, especially in smaller clusters trying to maximize hardware spend.

Just wanted to piggyback on this thread to echo Dan’s frustration.

Thanks,

Reed

On Jan 8, 2018, at 10:41 AM, Alfredo Deza <adeza@xxxxxxxxxx> wrote:

On Mon, Jan 8, 2018 at 10:53 AM, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
On Mon, Jan 8, 2018 at 4:37 PM, Alfredo Deza <adeza@xxxxxxxxxx> wrote:
On Thu, Dec 21, 2017 at 11:35 AM, Stefan Kooman <stefan@xxxxxx> wrote:
Quoting Dan van der Ster (dan@xxxxxxxxxxxxxx):
Thanks Stefan. But isn't there also some vgremove or lvremove magic
that's needed to bring down these /dev/dm-... devices I have?

Ah, you want to clean up properly before that. Sure:

lvremove -f <volume_group>/<logical_volume>
vgremove <volume_group>
pvremove /dev/ceph-device (should wipe labels)

So ideally there should be a ceph-volume lvm destroy / zap option that
takes care of this:

1) Properly remove LV/VG/PV as shown above
2) wipefs to get rid of LVM signatures
3) dd zeroes to get rid of signatures that might still be there
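For the common whole-disk case, a rough sketch of what such a destroy
could run (assuming ceph-volume's default ceph-<vg_uuid>/osd-block-<osd_fsid>
naming; /dev/sdX is just a placeholder):

  lvremove -f ceph-<vg_uuid>/osd-block-<osd_fsid>   # drop the logical volume
  vgremove ceph-<vg_uuid>                           # drop the volume group
  pvremove /dev/sdX                                 # wipe the PV label
  wipefs --all /dev/sdX                             # clear leftover signatures
  dd if=/dev/zero of=/dev/sdX bs=1M count=10        # zero the start of the disk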

ceph-volume does have a 'zap' subcommand, but it does not remove
logical volumes or groups. It is intended to leave those in place for
re-use. It uses wipefs, but
not in a way that would end up removing LVM signatures.

Docs for zap are at: http://docs.ceph.com/docs/master/ceph-volume/lvm/zap/
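For reference, zap can be pointed either at a raw device or at an
existing vg/lv (a quick sketch; device names are placeholders):

  ceph-volume lvm zap /dev/sdX                          # wipe a raw device
  ceph-volume lvm zap <volume_group>/<logical_volume>   # wipe an LV so it can be re-used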

The reason for not attempting removal is that an LV might not map
1-to-1 to a single device or volume group. It is being suggested here
to "vgremove <volume_group>", but what if the group has several other
LVs that should not get removed? Similarly, what if the logical volume
is not backed by a single PV but by many?

We believe that these operations should be up to the administrator,
who has better context as to what goes where and what (if anything)
really needs to be removed from LVM.

Maybe I'm missing something, but aren't most (almost all?) use-cases just

  ceph-volume lvm create /dev/<thewholedisk>

No

? Or do you expect most deployments to do something more complicated with lvm?


Yes, we do. For example dmcache, which to ceph-volume looks like a
plain logical volume, but can vary in how it is implemented behind the
scenes.
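As a rough illustration only (names and sizes are made up, not a
recommendation), a dmcache-backed OSD could be prepared with something
like:

  pvcreate /dev/sdX /dev/nvme0n1
  vgcreate ceph-vg /dev/sdX /dev/nvme0n1
  lvcreate -n osd-data -l 100%PVS ceph-vg /dev/sdX                       # data LV on the slow disk
  lvcreate --type cache-pool -n cachepool -L 100G ceph-vg /dev/nvme0n1   # cache pool on the fast device
  lvconvert --type cache --cachepool ceph-vg/cachepool ceph-vg/osd-data
  ceph-volume lvm create --bluestore --data ceph-vg/osd-data

ceph-volume just sees ceph-vg/osd-data as a logical volume; it has no
idea (nor does it need to) that a cache pool sits underneath.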

In the above whole-disk case, I think it would be useful to have a
very simple command to tear down whatever ceph-volume created, so that
ceph admins don't need to reverse engineer what ceph-volume is doing
with lvm.

Right, that would work if that was the only supported way of dealing
with lvm. We aren't imposing this; we added it as a convenience for
users who do not want to deal with lvm at all. LVM has a plethora of
ways to create an LV, and we don't want to either restrict users to
our view of LVM or attempt to understand all the many different ways
it may be used and assume some behavior is desired (like removing a VG).


Otherwise, perhaps it would be useful to document the expected normal
lifecycle of an LVM OSD: create, failure / replacement handling,
decommissioning.

Cheers, Dan





Gr. Stefan

--
| BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / info@xxxxxx
