Hi,
> 1) I am mostly asking about a non-cephadm environment and would just
> like to know if ceph-volume can also manage the VG of a DB/WAL device
> that is used for multiple OSDs, and create the individual LVs which
> are used for DB or WAL devices when creating a single OSD. Below you
> give an example "before we upgraded to Pacific" in which you run
> lvcreate manually. Is that not required anymore with >= Quincy?
Yes, ceph-volume handles the creation for you in both cases, multiple
OSDs as well as single OSDs. There are plenty of options available;
here's another example (Nautilus):
---snip---
test-node:~ # ceph-volume lvm batch /dev/sdb /dev/sdc --db-devices /dev/sdd
--> passed data devices: 2 physical, 0 LVM
--> relative data size: 1.0
--> passed block_db devices: 1 physical, 0 LVM
Total OSDs: 2

  Type            Path                                      LV Size         % of device
----------------------------------------------------------------------------------------
  data            /dev/sdb                                  5.00 GB         100.00%
  block_db        /dev/sdd                                  2.50 GB         50.00%
----------------------------------------------------------------------------------------
  data            /dev/sdc                                  5.00 GB         100.00%
  block_db        /dev/sdd                                  2.50 GB         50.00%
--> The above OSDs would be created if the operation continues
--> do you want to proceed? (yes/no)
---snip---
Note that it calculated the DB size automatically (I didn't specify a
db size, which I could have); on this node I have three 10 GB disks.
You can create your individual layout, and ceph-volume asks before
deploying (also when using the --report argument). And this has
nothing to do with Quincy: it has worked that way since Luminous, when
ceph-volume was introduced, IIRC, and the old ceph-disk utility was
deprecated with Mimic (https://docs.ceph.com/en/mimic/ceph-volume/).
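If you wanted to set the DB size explicitly, something like this
should do it (just a sketch, I haven't run this exact command here;
check ceph-volume lvm batch --help on your release for the exact
option name):

---snip---
test-node:~ # ceph-volume lvm batch --report --block-db-size 3G /dev/sdb /dev/sdc --db-devices /dev/sdd
---snip---

With --report it only prints the resulting layout and doesn't deploy
anything; drop it (or answer "yes" at the prompt) to actually create
the OSDs.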
You *can* but you *don't have to* create VGs/LVs before deploying.
Manually creating them was often done when operators didn't want
ceph-volume to consume an entire SSD, or for testing purposes.
> 2) Even with cephadm there is the "db_devices" as part of the
> drivegroups. But the question remains if cephadm can use a single
> db_device for multiple OSDs.
Yes, it can. If your server only has one SSD and your drivegroup.yaml
reflects that, cephadm will use that single SSD as the DB device for
multiple HDD-based OSDs. You can --dry-run a drivegroup.yaml (ceph
orch apply -i drivegroup.yaml --dry-run) to see what cephadm would do
with your specification.
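A minimal spec for a host with HDDs plus one SSD could look roughly
like this (a sketch from memory; the service_id and host_pattern are
placeholders, and the rotational filters assume spinning data disks
and a non-rotational DB device):

---snip---
test-node:~ # cat drivegroup.yaml
service_type: osd
service_id: hdd-osds-shared-db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
test-node:~ # ceph orch apply -i drivegroup.yaml --dry-run
---snip---

The dry run then previews which disks would become data devices and
which SSD would hold the shared DBs, without deploying anything.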
> As asked and explained in the paragraph above, this is what I am
> currently doing (lvcreate + ceph-volume lvm create). My question
> therefore is, if ceph-volume (!) could somehow create this LV for
> the DB automagically if I'd just give it a device (or existing VG)?
Yes, as explained before, that's what it's developed for. ;-) If for
some reason you're required to use only part of an SSD, it makes
sense to create the LVs manually, but otherwise ceph-volume can
handle all of that for you.
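The manual variant is then basically the lvcreate + ceph-volume lvm
create combination you're already using, e.g. (reusing your VG/LV
names from the example quoted below):

---snip---
test-node:~ # lvcreate -L 30G -n bluefsdb-30 ceph-journals
test-node:~ # ceph-volume lvm create --data /dev/sdh --block.db ceph-journals/bluefsdb-30
---snip---

Afterwards, ceph-volume lvm list shows how the LVs map to the OSDs.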
Does that clear things up a bit? Are we on the same page now? :-)
Regards,
Eugen
Quoting Christian Rohmann <christian.rohmann@xxxxxxxxx>:
> On 25.08.23 09:09, Eugen Block wrote:
>> I'm still not sure if we're on the same page.
> Maybe not, I'll respond inline to clarify.
>>> By looking at
>>> https://docs.ceph.com/en/latest/man/8/ceph-volume/#cmdoption-ceph-volume-lvm-prepare-block.db
>>> it seems that ceph-volume wants an LV or partition. So it's
>>> apparently not just taking a VG itself? Also if there were multiple
>>> VGs / devices, I likely would need to at least pick those.
>> ceph-volume creates all required VGs/LVs automatically, and the OSD
>> creation happens in batch mode, for example when run by cephadm:
>>
>> ceph-volume lvm batch --yes /dev/sdb /dev/sdc /dev/sdd
>>
>> In a non-cephadm deployment you can fiddle with ceph-volume
>> manually, where you also can deploy single OSDs, with or without
>> providing your own pre-built VGs/LVs. In a cephadm deployment
>> manually creating OSDs will result in "stray daemons not managed by
>> cephadm" warnings.
> 1) I am mostly asking about a non-cephadm environment and would just
> like to know if ceph-volume can also manage the VG of a DB/WAL device
> that is used for multiple OSDs, and create the individual LVs which
> are used for DB or WAL devices when creating a single OSD. Below you
> give an example "before we upgraded to Pacific" in which you run
> lvcreate manually. Is that not required anymore with >= Quincy?
>
> 2) Even with cephadm there is the "db_devices" as part of the
> drivegroups. But the question remains if cephadm can use a single
> db_device for multiple OSDs.
>> Before we upgraded to Pacific we did manage our block.db devices
>> manually with pre-built LVs, e.g.:
>>
>> $ lvcreate -L 30G -n bluefsdb-30 ceph-journals
>> $ ceph-volume lvm create --data /dev/sdh --block.db ceph-journals/bluefsdb-30
> As asked and explained in the paragraph above, this is what I am
> currently doing (lvcreate + ceph-volume lvm create). My question
> therefore is, if ceph-volume (!) could somehow create this LV for
> the DB automagically if I'd just give it a device (or existing VG)?
>
> Thank you very much for your patience in clarifying and responding
> to my questions.
> Regards
> Christian
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx