On 11.08.23 16:06, Eugen Block wrote:
> if you deploy OSDs from scratch you don't have to create LVs manually; that is handled entirely by ceph-volume (for example, on cephadm-based clusters you only provide a drivegroup definition).
Looking at https://docs.ceph.com/en/latest/man/8/ceph-volume/#cmdoption-ceph-volume-lvm-prepare-block.db it seems that ceph-volume wants an LV or partition for --block.db. So it apparently doesn't just take a VG by itself? Also, if there were multiple VGs / devices, I would likely need to at least pick those myself.
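In other words, I would expect to end up running something like this per OSD (device and LV names below are just placeholders, not from an actual setup):

  # data on the HDD, DB on a pre-created LV on the fast device
  ceph-volume lvm prepare --bluestore \
      --data /dev/sdb \
      --block.db ceph-db-vg/db-lv-0

where ceph-db-vg/db-lv-0 is an LV I would have to create on the fast device beforehand.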
But I suppose this orchestration would then require cephadm (https://docs.ceph.com/en/latest/cephadm/services/osd/#drivegroups) and cannot be done via ceph-volume, which merely takes care of ONE OSD at a time.
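If I read the drivegroups docs correctly, a spec along these lines (service_id and the rotational selectors are just an example, adjust as needed) would let cephadm place the DBs on the fast devices automatically:

  service_type: osd
  service_id: hdd_with_fast_db
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0

applied with something like 'ceph orch apply -i osd-spec.yml'. But that only helps if the cluster is managed by cephadm in the first place.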
> I'm not sure if automating db/wal migration has been considered; it might be (too) difficult. But moving the db/wal devices to new/different devices doesn't seem to be a recurring issue (corner case?), so maybe having control over that process for each OSD individually is the safe(r) option in case something goes wrong.
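That per-OSD control seems to exist already via ceph-volume lvm migrate, roughly like this (osd id, fsid and target LV are placeholders, and as far as I understand the OSD should be stopped first):

  ceph-volume lvm migrate --osd-id 0 --osd-fsid <osd-fsid> \
      --from db --target ceph-db-vg/db-lv-new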
Sorry for the confusion. I was not talking about any migrations, just the initial creation of spinning rust OSDs with DB or WAL on fast storage.
Regards
Christian