Hi,
so there currently is a section on how to configure nova [0], but it
refers to the client-side ceph.conf, not the rbd details in nova.conf,
as Ilya already pointed out. I'll just add what I have in the
[libvirt] section of the nova.conf in one of my test clusters (we use
it identically in our production clusters):
[libvirt]
virt_type = kvm
live_migration_uri = "qemu+ssh://%s/system"
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
cpu_mode = host-passthrough
disk_cachemodes = network=writeback
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <UUID>
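The rbd_secret_uuid refers to the libvirt secret that stores the
cinder key, as described in the client authentication section of the
docs [2]. As a rough sketch only (the UUID below is the example value
from the docs, and client.cinder.key is assumed to contain the key of
the cinder user):

uuidgen
# e.g. 457eb676-33da-42ec-9a8c-9293d545c337
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 \
  --base64 $(cat client.cinder.key)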
Maybe leave out the non-rbd config options so the docs only show a
minimal config? It is common to configure the cinder user for nova as
well, because nova requires access to both ephemeral disks and
persistent volumes (just mentioning that in case it's not commonly
known).
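That is also why the caps example in the authentication section [2]
grants client.cinder access to the vms pool. From memory it looks
roughly like this, but please double-check against the current docs:

ceph auth get-or-create client.cinder \
  mon 'profile rbd' \
  osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images' \
  mgr 'profile rbd pool=volumes, profile rbd pool=vms'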
And this permission topic brings me to a thread [1] Christian Rohmann
brought up in the openstack-discuss mailing list. If it's not the
right place to bring this up, please ignore this section.
There have been changes regarding glance permissions, and the
(openstack) docs are not consistent anymore; maybe someone from the
ceph team could assist and get them consistent again? I CC'ed
Christian here as well.
The ceph docs [2] don't mention any permissions for the glance user
other than for the images pool, so the question is:
e) Instead of trial and error on the required "rados_*"-prefixed
objects, maybe it makes sense to have someone from Ceph look into
this and define which caps are actually required to allow
list_children on RBD images with children in other pools?
@Christian: regarding auth caps this was the main question, right?
Thanks,
Eugen
[0] https://docs.ceph.com/en/latest/rbd/rbd-openstack/#configuring-nova
[1]
https://lists.openstack.org/archives/list/openstack-discuss@xxxxxxxxxxxxxxxxxxx/message/JVZHT4O45ZBMDEMLE7W6JFH5KXD3SL7F/
[2]
https://docs.ceph.com/en/latest/rbd/rbd-openstack/#setup-ceph-client-authentication
Quote from Zac Dover <zac.dover@xxxxxxxxx>:
You guys can just respond here and I’ll add your responses to the docs.
Zac
On Thu, Jan 25, 2024 at 05:52, Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
On Wed, Jan 24, 2024 at 7:31 PM Eugen Block <eblock@xxxxxx> wrote:
We do like the separation of nova pools as well, and we also heavily
use ephemeral disks instead of boot-from-volume instances. One of the
reasons is that you can't detach a root volume from an instance.
It helps in specific maintenance cases, so +1 for keeping it in the
docs.
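(For context, a hypothetical CLI example of the distinction; the
image, volume, flavor and network names are placeholders. With
images_type = rbd, the first command puts the root disk into the vms
pool as an ephemeral disk, while the second boots from a cinder
volume in the volumes pool:

openstack server create --image <image> --flavor <flavor> --network <net> vm-ephemeral
openstack server create --volume <volume> --flavor <flavor> --network <net> vm-bfv
)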
So it seems like, instead of dropping mentions of the vms pool, we
should expand the "Configuring Nova" section where it says
In order to boot virtual machines directly from Ceph volumes, you
must configure the ephemeral backend for Nova.
with appropriate steps and an /etc/nova/nova.conf snippet. I'm guessing
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
at a minimum?
Eugen, do you want to suggest a precise edit based on your working
configuration for Zac to incorporate, or perhaps even open a PR
directly?
Thanks,
Ilya
Quote from Erik McCormick <emccormick@xxxxxxxxxxxxxxx>:
> On Wed, Jan 24, 2024 at 10:02 AM Murilo Morais <murilo@xxxxxxxxxxxxxx>
> wrote:
>
>> Good afternoon everybody!
>>
>> I have a question regarding the documentation... I was reviewing it and
>> realized that the "vms" pool is not being used anywhere in the configs.
>>
>> The first mention of this pool was in commit 2eab1c1 and, in
>> e9b13fa, the configuration section of nova.conf was removed, but
>> the pool configuration remained there.
>>
>> Would it be correct to ignore all mentions of this pool (I don't see any
>> use for it)? If so, it would be interesting to update the documentation.
>>
>> https://docs.ceph.com/en/latest/rbd/rbd-openstack/#create-a-pool
>
>
> The use of that "vms" pool is for Nova to directly store
> "ephemeral" disks in ceph instead of on local disk. It used to be
> described in the Ceph doc, but seems to no longer be there. It's
> still in the Red Hat version [1], however. Wouldn't it be better to
> put that back instead of removing the creation of the vms pool from
> the docs? Maybe there's a good reason we only want to boot instances
> into volumes now, but I'm not aware of it.
>
> [1] - Section 3.4.3 of
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/ceph_block_device_to_openstack_guide/index
>
> -Erik
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx