On 29/07/2020 16:54, Wido den Hollander wrote:
On 29/07/2020 16:00, Jason Dillaman wrote:
On Wed, Jul 29, 2020 at 9:07 AM Jason Dillaman <jdillama@xxxxxxxxxx>
wrote:
On Wed, Jul 29, 2020 at 9:03 AM Wido den Hollander <wido@xxxxxxxx>
wrote:
On 29/07/2020 14:54, Jason Dillaman wrote:
On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander <wido@xxxxxxxx>
wrote:
Hi,
I'm trying to have clients read the 'rbd_default_data_pool' config
option from the config store when creating an RBD image.
This doesn't seem to work and I'm wondering if somebody knows why.
It looks like all string-based config overrides for RBD are ignored:
2020-07-29T08:52:44.393-0400 7f2a97fff700 4 set_mon_vals failed to
set rbd_default_data_pool = rbd-data: Configuration option
'rbd_default_data_pool' may not be modified at runtime
librbd always accesses the config options in a thread-safe manner, so
I'll open a tracker ticket to flag all the RBD string config options
as runtime updatable (primitive data type options are implicitly
runtime updatable).
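As a side note: on releases that have 'ceph config help', it should
show whether a given option is flagged as updatable at runtime, e.g.:

$ ceph config help rbd_default_data_pool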
I wasn't updating it at runtime; I just wanted to make sure that I
don't have to set this in ceph.conf everywhere (and libvirt doesn't
read ceph.conf).
You weren't updating it at runtime -- the MON's "MConfig" message back
to the client was attempting to set the config option after "rbd" had
already started. However, if it's working under Python, perhaps there
is an easy tweak for "rbd" to have it delay flagging the application
as having started until after it has connected to the cluster. Right
now it manages its own CephContext lifetime, which it re-uses when
creating a librados connection. It's that CephContext that is flagged
as "running" prior to librados actually connecting to the cluster.
It looks like this is caused by two issues:
-- In [1], this will prevent librados from applying any MON config
overrides (for strings). This line can just be trivially removed.
-- Even after fixing that, there is a race in librados / MonClient [2]:
it first attempts to pull the config from the MONs, but it uses a
separate thread to actually apply the received config values, which
can race w/ the completion of the bootstrap occurring in the main
thread. This means that the example below may work sometimes -- and
may fail other times.
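Until that is fixed, one possible client-side workaround (rough
sketch, assuming the same 'rbd' / 'rbd-data' pool names as in the
example below) is to pin the option on the rados handle before
connecting, so the image create no longer depends on when the MON
config values arrive:

#!/usr/bin/python3
# Rough sketch of a workaround: pin rbd_default_data_pool on the client
# before connect(), so the result doesn't depend on the MonClient race.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
# Set the option explicitly so the image is created with the data pool
# regardless of whether the MON config values have been applied yet.
cluster.conf_set('rbd_default_data_pool', 'rbd-data')
cluster.connect()

ioctx = cluster.open_ioctx('rbd')
rbd.RBD().create(ioctx, 'myimage', 4 * 1024**3)  # 4 GiB
ioctx.close()
cluster.shutdown()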
Interesting! In this case it will be libvirt, which runs forever and
talks to librbd/librados.
I'll need to see how that works out. I'll test and report back.
I can confirm this works with Libvirt. I created an RBD volume through
Libvirt's RBD storage driver, and this resulted in the 'data-pool'
feature being set and the RBD image using the data pool.
On the hypervisor where libvirt runs, no ceph.conf is present. All
information is provided through Libvirt's XML definitions, which only
contain the Monitors and the Cephx credentials.
In this case librados/librbd fetched the configuration from the Config
Store and thus detected it needed to use the data pool feature.
I'll keep an eye out to see if this goes wrong and accidentally
creates an image without this feature.
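Something like the rough sketch below (python-rbd bindings, run from a
node that does have a ceph.conf; pool/image names are just
placeholders) could be used to spot an image that ended up without it:

#!/usr/bin/python3
# Rough sketch: check whether an existing image got the data-pool feature.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')

# Open the image read-only and test the feature bitmask.
with rbd.Image(ioctx, 'myimage', read_only=True) as img:
    if img.features() & rbd.RBD_FEATURE_DATA_POOL:
        print('myimage uses a separate data pool')
    else:
        print('WARNING: myimage was created without the data-pool feature')

ioctx.close()
cluster.shutdown()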
Running 15.2.4 in this case on Ubuntu 18.04
Wido
Wido
But it seems that Python works:
#!/usr/bin/python3
import rados
import rbd

# Connecting to the cluster is also where the MON config store values
# (such as rbd_default_data_pool) are fetched.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

ioctx = cluster.open_ioctx('rbd')

# librbd picks up rbd_default_data_pool when creating the image.
rbd_inst = rbd.RBD()
size = 4 * 1024**3  # 4 GiB
rbd_inst.create(ioctx, 'myimage', size)

ioctx.close()
cluster.shutdown()
And then:
$ ceph config set client rbd_default_data_pool rbd-data
rbd image 'myimage':
        size 4 GiB in 1024 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 1aa963a21028
        data_pool: rbd-data
        block_name_prefix: rbd_data.2.1aa963a21028
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, data-pool
I haven't tested this through libvirt yet. That's the next thing to
test.
Wido
I tried:
$ ceph config set client rbd_default_data_pool rbd-data
$ ceph config set global rbd_default_data_pool rbd-data
They both show up under:
$ ceph config dump
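For what it's worth, the stored value can also be read back per
section, e.g.:

$ ceph config get client rbd_default_data_pool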
However, RBD images newly created with the 'rbd' CLI tool do not use
the data pool.
If I set this in ceph.conf it works:
[client]
rbd_default_data_pool = rbd-data
Somehow librbd isn't fetching these configuration options. Any
hints on
how to get this working?
The end result is that libvirt (which doesn't read ceph.conf) should
also be able to create RBD images with a different data pool.
Wido
[1] https://github.com/ceph/ceph/blob/master/src/tools/rbd/Utils.cc#L680
[2] https://github.com/ceph/ceph/blob/master/src/mon/MonClient.cc#L445
--
Jason
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx