Hi,
I'm still not sure if we're on the same page.
> By looking at
> https://docs.ceph.com/en/latest/man/8/ceph-volume/#cmdoption-ceph-volume-lvm-prepare-block.db
> it seems that ceph-volume wants an LV or partition. So it's apparently
> not just taking a VG itself? Also, if there were multiple VGs/devices,
> I would likely need to at least pick those.
ceph-volume creates all required VGs/LVs automatically, and the OSD
creation happens in batch mode, for example when run by cephadm:
ceph-volume lvm batch --yes /dev/sdb /dev/sdc /dev/sdd
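If you want the DBs split off onto a fast device in batch mode as well,
the same command accepts --db-devices (the device names below are just
placeholders, adjust them to your hosts):

ceph-volume lvm batch --yes /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1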
In a non-cephadm deployment you can fiddle with ceph-volume manually,
where you can also deploy single OSDs, with or without providing your
own pre-built VGs/LVs. In a cephadm deployment, manually creating OSDs
will result in "stray daemons not managed by cephadm" warnings.
Before we upgraded to Pacific we did manage our block.db devices
manually with pre-built LVs, e.g.:
$ lvcreate -L 30G -n bluefsdb-30 ceph-journals
$ ceph-volume lvm create --data /dev/sdh --block.db ceph-journals/bluefsdb-30
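Since you mentioned WAL as well: the same command also takes --block.wal
if you want the WAL on yet another (even faster) device. The VG/LV names
here are just examples:

$ lvcreate -L 2G -n bluefswal-30 ceph-wal
$ ceph-volume lvm create --data /dev/sdh --block.db ceph-journals/bluefsdb-30 --block.wal ceph-wal/bluefswal-30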
> Sorry for the confusion. I was not talking about any migrations,
> just the initial creation of spinning rust OSDs with DB or WAL on
> fast storage.
So the question is, is your cluster (or are your clusters) managed by
cephadm? If so, you don't need to worry about ceph-volume, it will be
handled for you in batch mode (you can inspect the ceph-volume.log
afterwards). You just need to provide a yaml file that fits your needs
with regard to block.db and data devices.
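As a minimal sketch (the service_id and the rotational filters are just
an example, adjust them to your drive layout), such a spec could look
like this and be applied with 'ceph orch apply -i osd-spec.yaml':

service_type: osd
service_id: hdd-with-fast-db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0

cephadm/ceph-volume would then create the required VGs/LVs and place
the block.db for the spinning disks on the non-rotational devices on
each matching host.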
Quoting Christian Rohmann <christian.rohmann@xxxxxxxxx>:
On 11.08.23 16:06, Eugen Block wrote:
If you deploy OSDs from scratch you don't have to create LVs
manually; that is handled entirely by ceph-volume (for example, on
cephadm-based clusters you only provide a drivegroup definition).
By looking at
https://docs.ceph.com/en/latest/man/8/ceph-volume/#cmdoption-ceph-volume-lvm-prepare-block.db
it seems that ceph-volume wants an LV or partition. So it's apparently
not just taking a VG itself? Also, if there were multiple VGs/devices,
I would likely need to at least pick those.
But I suppose this orchestration would then require cephadm
(https://docs.ceph.com/en/latest/cephadm/services/osd/#drivegroups)
and cannot be done via ceph-volume, which merely takes care of ONE
OSD at a time.
I'm not sure if automating db/wal migration has been considered; it
might be (too) difficult. But moving the db/wal devices to
new/different devices doesn't seem to be a recurring issue (corner
case?), so maybe having control over that process for each OSD
individually is the safe(r) option in case something goes wrong.
Sorry for the confusion. I was not talking about any migrations,
just the initial creation of spinning rust OSDs with DB or WAL on
fast storage.
Regards
Christian