Re: ceph-volume lvm deactivate/destroy/zap

On Wed, Jan 10, 2018 at 2:10 AM, Fabian Grünbichler
<f.gruenbichler@xxxxxxxxxxx> wrote:
> On Tue, Jan 09, 2018 at 02:14:51PM -0500, Alfredo Deza wrote:
>> On Tue, Jan 9, 2018 at 1:35 PM, Reed Dier <reed.dier@xxxxxxxxxxx> wrote:
>> > I would just like to mirror Dan van der Ster’s sentiments.
>> >
>> > As someone attempting to move an OSD to bluestore with limited/no LVM
>> > experience, it is a completely different beast and complexity level
>> > compared to the ceph-disk/filestore days.
>> >
>> > ceph-deploy was a very simple tool that did exactly what I was looking to
>> > do, but now ceph-disk has been deprecated halfway into a release, and
>> > ceph-deploy doesn’t appear to fully support ceph-volume, which is now the
>> > official way to manage OSDs moving forward.
>>
>> ceph-deploy now fully supports ceph-volume; we should get a release out soon
>>
>> >
>> > My ceph-volume create statement ‘succeeded’ but the OSD doesn’t start, so
>> > now I am trying to zap the disk to try to recreate the OSD, and the zap is
>> > failing as Dan’s did.
>>
>> I would encourage you to open a ticket in the tracker so that we can
>> improve on what failed for you
>>
>> http://tracker.ceph.com/projects/ceph-volume/issues/new
>>
>> ceph-volume keeps thorough logs in /var/log/ceph/ceph-volume.log and
>> /var/log/ceph/ceph-volume-systemd.log
>>
>> If you create a ticket, please make sure to add all the output and
>> steps that you can
>> >
>> > And yes, I was able to get it zapped using the lvremove, vgremove, pvremove
>> > commands, but that is not obvious to someone who hasn’t used LVM extensively
>> > for storage management before.
>> >
>> > I also want to mirror Dan’s sentiments about the unnecessary complexity
>> > imposed on what I expect is the default use case: using an entire disk.
>> > I can’t see anything other than the ‘entire disk’ method being the
>> > largest use case for users of Ceph, especially the smaller clusters
>> > trying to maximize hardware/spend.
>>
>> We didn't take the introduction of LVM lightly. The new tool
>> addresses several insurmountable issues with how ceph-disk operated.
>>
>> Although using an entire disk might be easier in the use case you are
>> in, it is certainly not the only thing we have to support, so we can't
>> reliably decide what strategy would be best for destroying that volume
>> or group, or whether the PV should be destroyed as well.
>
> wouldn't it be possible to detect on creation that it is a full physical
> disk that gets initialized completely by ceph-volume, store that in the
> metadata somewhere and clean up accordingly when destroying the OSD?

When the OSD is created, we capture a lot of metadata about devices:
what goes where (even if the device changes names), and
which devices are part of an OSD. For example, we can accurately tell if
a device is a journal and which OSD it is associated with.
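
For reference, most of that metadata ends up as LVM tags on the LV (e.g.
ceph.osd_id, ceph.journal_device; exact tag names may vary by version),
so it can be inspected either through ceph-volume itself or with plain
LVM tools:

    ceph-volume lvm list
    lvs -o lv_name,vg_name,lv_tags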

The removal of an LV and its corresponding VG is very destructive, with
no way to revert, and even though we allow a simplistic approach of
creating the VG and LV for you, that doesn't necessarily mean an
operator will want to have the VG fully destroyed when zapping an LV.

There are two use cases here:

1) An operator is redeploying and wants to completely remove the VG
(including the PV and LV), which may or may not have been created by
ceph-volume
2) An operator already has VGs and LVs in place and wants to reuse
them for an OSD - no need to destroy the underlying VG (see the sketch
below)
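
As a rough sketch of #2 (the device and names below are made up for
illustration), an operator can carve out the VG/LV themselves and point
ceph-volume at the result, and zap will leave that VG intact:

    # pre-existing LVM setup, not created by ceph-volume
    pvcreate /dev/sdb
    vgcreate ceph-vg /dev/sdb
    lvcreate -n osd-data -l 100%FREE ceph-vg
    # reuse it for an OSD
    ceph-volume lvm create --bluestore --data ceph-vg/osd-data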

We must support #2, but I can see that a lot of users would like a more
transparent removal of LVM-related devices, mirroring what ceph-volume
does when creating them.

How about a flag that allows that behavior (although not enabled by
default) so that `zap` can destroy the LVM devices as well? So instead
of:

    ceph-volume lvm zap vg/lv

We would offer:

    ceph-volume lvm zap --destroy vg/lv

Which would get rid of the LV, VG, and PV as well.
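
Under the hood (just a sketch, not final behavior), --destroy would wrap
the manual teardown that currently has to be done by hand, roughly:

    lvremove -f {vg}/{lv}
    vgremove {vg}
    pvremove {/dev/device}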


>
>>
>> The 'zap' sub-command will allow that lv to be reused for an OSD and
>> that should work. Again, if it isn't sufficient, we really do need
>> more information and a
>> ticket in the tracker is the best way.
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



