Re: ceph-volume lvm deactivate/destroy/zap

On Mon, Jan 8, 2018 at 4:37 PM, Alfredo Deza <adeza@xxxxxxxxxx> wrote:
> On Thu, Dec 21, 2017 at 11:35 AM, Stefan Kooman <stefan@xxxxxx> wrote:
>> Quoting Dan van der Ster (dan@xxxxxxxxxxxxxx):
>>> Thanks Stefan. But isn't there also some vgremove or lvremove magic
>>> that needs to bring down these /dev/dm-... devices I have?
>>
>> Ah, you want to clean up properly before that. Sure:
>>
>> lvremove -f <volume_group>/<logical_volume>
>> vgremove <volume_group>
>> pvremove /dev/ceph-device (should wipe labels)
>>
>> So ideally there should be a ceph-volume lvm destroy / zap option that
>> takes care of this:
>>
>> 1) Properly remove LV/VG/PV as shown above
>> 2) wipefs to get rid of LVM signatures
>> 3) dd zeroes to get rid of signatures that might still be there
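
For anyone following along: here is that cleanup made concrete, as I
understand it. The VG/LV and device names are only examples for a
simple one-LV-on-one-PV OSD, so adjust them to your layout.

   lvremove -f ceph-vg/osd-lv    # remove the OSD's logical volume
   vgremove ceph-vg              # remove the now-empty volume group
   pvremove /dev/sdX             # wipe the LVM label from the device
   wipefs -a /dev/sdX            # clear any remaining signatures
   dd if=/dev/zero of=/dev/sdX bs=1M count=100 oflag=direct  # zero the first 100 MB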
>
> ceph-volume does have a 'zap' subcommand, but it does not remove
> logical volumes or groups. It is intended to leave those in place for
> re-use. It uses wipefs, but not in a way that would end up removing
> LVM signatures.
>
> Docs for zap are at: http://docs.ceph.com/docs/master/ceph-volume/lvm/zap/
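
For reference, if I am reading those docs right, zap can be pointed at
either a raw device or an existing vg/lv, for example (names here are
just examples):

   ceph-volume lvm zap /dev/sdX
   ceph-volume lvm zap ceph-vg/osd-lv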
>
> The reason for not attempting removal is that an LV might not map
> 1-to-1 to a device or volume group. The suggestion here is to
> "vgremove <volume_group>", but what if that group has several other
> LVs that should not get removed? Similarly, what if the logical
> volume sits not on a single PV but on many?
>
> We believe these operations should be left to the administrator, who
> has better context as to what goes where and what (if anything)
> really needs to be removed from LVM.

Maybe I'm missing something, but aren't most (almost all?) use-cases just

   ceph-volume lvm create /dev/<thewholedisk>

? Or do you expect most deployments to do something more complicated with lvm?

In that whole-disk case, I think it would be useful to have a very
simple command to tear down whatever ceph-volume created, so that ceph
admins don't need to reverse engineer what ceph-volume is doing with
lvm.
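
To be concrete, I am thinking of a single teardown command that bundles
the zap plus the lvremove/vgremove/pvremove steps quoted above, e.g. a
hypothetical

   ceph-volume lvm destroy /dev/<thewholedisk>

that simply reverses whatever "ceph-volume lvm create" set up on that
disk.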

Otherwise, perhaps it would be useful to document the expected normal
lifecycle of an lvm osd: create, failure / replacement handling,
decommissioning.

Cheers, Dan



>
>>
>> Gr. Stefan
>>
>> --
>> | BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
>> | GPG: 0xD14839C6                   +31 318 648 688 / info@xxxxxx
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


