Migrating from block to LVM

Greetings all!


I am planning an upgrade from Mimic to Nautilus in the near future. We have a cluster built on 480 OSDs, all using multipath and simple block devices. I see that the ceph-disk tool is now deprecated and that ceph-volume doesn’t do everything ceph-disk did for simple devices (e.g. so far as I have been able to figure out, I’m unable to activate a new OSD and set the location of the WAL/block.db). So disk replacements going forward could get ugly.
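
For context, my understanding is that a disk replacement on an LVM-based OSD would let me place block.db explicitly, something like the sketch below (the multipath paths are placeholders for whatever the real mpath names turn out to be):

    # Wipe any previous state on the replacement disk (placeholder path).
    ceph-volume lvm zap --destroy /dev/mapper/mpatha

    # Create the new BlueStore OSD, putting block.db on a separate device.
    ceph-volume lvm create --bluestore \
        --data /dev/mapper/mpatha \
        --block.db /dev/mapper/mpathb

If there is an equivalent for simple (non-LVM) devices that I’ve missed, I’d be glad to hear about it.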


We deploy/manage using Ceph Ansible.


I’m okay with updating the OSDs to LVM and understand that it will require a full rebuild of each OSD.


I was thinking of going OSD by OSD through the cluster until they are all converted. However, someone suggested doing an entire node at a time (which would be 20 OSDs at a time in this case). Is one method going to be better than the other?
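
For the OSD-by-OSD approach, the per-OSD procedure I have in mind is roughly the following (a sketch only; the OSD id and device paths are placeholders):

    ID=123                               # placeholder OSD id
    ceph osd out $ID

    # Wait for the data to drain off the OSD before destroying it.
    while ! ceph osd safe-to-destroy osd.$ID; do sleep 60; done

    systemctl stop ceph-osd@$ID          # on the OSD host
    ceph osd destroy $ID --yes-i-really-mean-it

    # Rebuild as an LVM OSD, reusing the same OSD id.
    ceph-volume lvm zap --destroy /dev/mapper/mpatha
    ceph-volume lvm create --bluestore --osd-id $ID \
        --data /dev/mapper/mpatha --block.db /dev/mapper/mpathb

I assume doing a node at a time would be the same dance for all 20 OSDs at once.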


Also, a question about setting up LVM: given that I’m using multipath devices, do I have to preconfigure the LVM devices (PVs/VGs/LVs) before running the Ansible plays, or will Ansible take care of the LVM setup even though the devices are on multipath?
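
If preconfiguring is the answer, I assume it would be the usual LVM layering on top of the multipath devices, along these lines (names are placeholders):

    # Stack LVM on the multipath device (placeholder paths/names).
    pvcreate /dev/mapper/mpatha
    vgcreate ceph-block-0 /dev/mapper/mpatha
    lvcreate -n block-0 -l 100%FREE ceph-block-0

with ceph-ansible’s lvm_volumes then pointing at those VG/LV names, if I’m reading the docs correctly.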


I would then do the upgrade from Mimic to Nautilus after all the OSDs were converted.


I’m looking for opinions on best practices for completing this, as I’d like to minimize the impact on our clients.


Cheers,

Mike Cave

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
