Re: ceph-volume lvm deactivate/destroy/zap

On Tue, Jan 09, 2018 at 02:14:51PM -0500, Alfredo Deza wrote:
> On Tue, Jan 9, 2018 at 1:35 PM, Reed Dier <reed.dier@xxxxxxxxxxx> wrote:
> > I would just like to mirror Dan van der Ster’s sentiments.
> >
> > As someone attempting to move an OSD to bluestore with limited/no LVM
> > experience, I find it a completely different beast, and a different level
> > of complexity, compared to the ceph-disk/filestore days.
> >
> > ceph-deploy was a very simple tool that did exactly what I was looking to
> > do, but now ceph-disk has been deprecated halfway into a release, and
> > ceph-deploy doesn’t appear to fully support ceph-volume, which is now the
> > official way to manage OSDs going forward.
> 
> ceph-deploy now fully supports ceph-volume; we should get a release out soon.
> 
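For reference, a minimal sketch of what that would look like with the
ceph-volume backend; the host name and device are placeholders, and the exact
flags are an assumption about the upcoming ceph-deploy release rather than
confirmed syntax:

    # assumed new-style ceph-deploy syntax, driving ceph-volume under the hood;
    # 'node1' and '/dev/sdb' are placeholders
    ceph-deploy osd create --data /dev/sdb node1
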
> >
> > My ceph-volume create statement ‘succeeded’ but the OSD doesn’t start, so
> > now I am trying to zap the disk to try to recreate the OSD, and the zap is
> > failing as Dan’s did.
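A rough sketch of checks that can narrow down why a freshly created OSD
refuses to start; the OSD id 12 is a placeholder, and this assumes the usual
systemd units that ceph-volume sets up:

    # show what ceph-volume thinks it created (LVs, tags, osd id/fsid)
    ceph-volume lvm list
    # check the systemd unit for the OSD; '12' is a placeholder osd id
    systemctl status ceph-osd@12
    journalctl -u ceph-osd@12 --no-pager | tail -n 50
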
> 
> I would encourage you to open a ticket in the tracker so that we can
> improve on whatever failed for you.
> 
> http://tracker.ceph.com/projects/ceph-volume/issues/new
> 
> ceph-volume keeps thorough logs in /var/log/ceph/ceph-volume.log and
> /var/log/ceph/ceph-volume-systemd.log
> 
> If you create a ticket, please make sure to include all the output and
> steps that you can.
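When filing such a ticket, something along these lines captures the usual
starting points (the log paths are the ones mentioned above):

    # grab the tail of both ceph-volume logs to attach to the ticket,
    # along with the exact create command that was run
    tail -n 200 /var/log/ceph/ceph-volume.log
    tail -n 200 /var/log/ceph/ceph-volume-systemd.log
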
> >
> > And yes, I was able to get it zapped using the lvremove, vgremove, pvremove
> > commands, but that is not obvious to someone who hasn’t used LVM extensively
> > for storage management before.
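For anyone else hitting this, a sketch of that manual cleanup; the VG/LV names
and /dev/sdX are placeholders, and this permanently wipes the volume, so
double-check the names first:

    # find the LV/VG that ceph-volume created (they carry ceph.* tags)
    lvs -o lv_name,vg_name,lv_tags
    # remove the logical volume, the volume group, then the physical volume
    lvremove -f <vg_name>/<lv_name>
    vgremove <vg_name>
    pvremove /dev/sdX
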
> >
> > I also want to mirror Dan’s sentiments about the unnecessary complexity
> > imposed on what I expect is the default use case: an entire disk being
> > used. I expect the ‘entire disk’ method to be by far the largest use case
> > for users of ceph, especially the smaller clusters trying to maximize
> > hardware/spend.
> 
> We don't take the introduction of LVM here lightly. The new tool addresses
> several issues that were insurmountable with how ceph-disk operated.
> 
> Although using an entire disk might be easier in your use case, it is
> certainly not the only layout we have to support, so we can't reliably
> decide which strategy would be best when destroying: remove just the
> volume, the volume group, or the PV as well.

Wouldn't it be possible to detect at creation time that a full physical
disk is being initialized entirely by ceph-volume, store that somewhere in
the metadata, and clean up accordingly when destroying the OSD?
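
Since ceph-volume already records its OSD metadata as LVM tags on the LV, one
way that could look is sketched below; the ceph.whole_device tag is purely
hypothetical and not something ceph-volume sets today:

    # the existing metadata can be inspected via LVM tags:
    lvs -o lv_tags --noheadings <vg_name>/<lv_name>
    # hypothetical: mark at create time that the whole disk was consumed
    lvchange --addtag ceph.whole_device=1 <vg_name>/<lv_name>
    # a later destroy/zap could check for that tag and, if present,
    # also run vgremove and pvremove on its own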

> 
> The 'zap' sub-command will allow that LV to be reused for an OSD, and
> that should work. Again, if it isn't sufficient, we really do need more
> information, and a ticket in the tracker is the best way to provide it.
> 
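In practice that looks something like the following; <vg_name>/<lv_name> is
whatever 'ceph-volume lvm list' reports for the OSD, and the raw-device form
is a sketch rather than a guarantee for every release:

    # wipe an existing LV so it can be reused for a new OSD
    ceph-volume lvm zap <vg_name>/<lv_name>
    # or zap a device that ceph-volume set itself up on
    ceph-volume lvm zap /dev/sdX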
