On 25.08.23 09:09, Eugen Block wrote:
I'm still not sure if we're on the same page.
Maybe not; I'll respond inline to clarify.
Looking at
https://docs.ceph.com/en/latest/man/8/ceph-volume/#cmdoption-ceph-volume-lvm-prepare-block.db
it seems that ceph-volume wants an LV or a partition. So apparently it
doesn't just take a VG itself? Also, if there were multiple VGs/devices,
I would likely need to at least pick those.
ceph-volume creates all required VGs/LVs automatically, and the OSD
creation happens in batch mode, for example when run by cephadm:
ceph-volume lvm batch --yes /dev/sdb /dev/sdc /dev/sdd
In a non-cephadm deployment you can fiddle with ceph-volume manually,
where you can also deploy single OSDs, with or without providing your
own pre-built VGs/LVs. In a cephadm deployment, manually creating OSDs
will result in "stray daemons not managed by cephadm" warnings.
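(For illustration, my reading of the docs is that a shared DB device
would be passed alongside the data devices in batch mode, roughly like
this; the device names are placeholders and I have not verified this
myself:

ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1

where --report should only show what would be created without touching
the disks.)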
1) I am mostly asking about a non-cephadm environment and would just
like to know whether ceph-volume can also manage the VG of a DB/WAL
device that is shared by multiple OSDs and create the individual LVs
used for DB or WAL when creating a single OSD. Below you give an
example "before we upgraded to Pacific" in which you run lvcreate
manually. Is that not required anymore with >= Quincy?
2) Even with cephadm there is "db_devices" as part of the
drivegroups. But the question remains whether cephadm can use a single
db_device for multiple OSDs.
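For reference, the kind of spec I have in mind would look roughly like
this (service_id, host pattern and device paths are just made-up
placeholders):

service_type: osd
service_id: osd-shared-db
placement:
  host_pattern: '*'
spec:
  data_devices:
    paths:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
  db_devices:
    paths:
      - /dev/nvme0n1

which I would then test with something like
"ceph orch apply -i osd_spec.yml --dry-run".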
Before we upgraded to Pacific we did manage our block.db devices
manually with pre-built LVs, e.g.:
$ lvcreate -L 30G -n bluefsdb-30 ceph-journals
$ ceph-volume lvm create --data /dev/sdh --block.db ceph-journals/bluefsdb-30
As asked and explained in the paragraph above, this is what I am
currently doing (lvcreate + ceph-volume lvm create). My question
therefore is whether ceph-volume (!) could somehow create this LV for
the DB automagically if I just gave it a device (or an existing VG).
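What I was hoping for is something along these lines (purely
hypothetical, this is exactly what I don't know whether it works):

$ ceph-volume lvm create --data /dev/sdh --block.db ceph-journals

i.e. handing it the whole VG (or even a raw device) and letting
ceph-volume carve out the LV for the DB itself instead of running
lvcreate first.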
Thank you very much for your patience in clarifying and responding to my
questions.
Regards
Christian
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx