Re: Setting rbd_default_data_pool through the config store

On Wed, Jul 29, 2020 at 9:03 AM Wido den Hollander <wido@xxxxxxxx> wrote:
>
>
>
> On 29/07/2020 14:54, Jason Dillaman wrote:
> > On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander <wido@xxxxxxxx> wrote:
> >>
> >> Hi,
> >>
> >> I'm trying to have clients read the 'rbd_default_data_pool' config
> >> option from the config store when creating a RBD image.
> >>
> >> This doesn't seem to work and I'm wondering if somebody knows why.
> >
> > It looks like all string-based config overrides for RBD are ignored:
> >
> > 2020-07-29T08:52:44.393-0400 7f2a97fff700  4 set_mon_vals failed to
> > set rbd_default_data_pool = rbd-data: Configuration option
> > 'rbd_default_data_pool' may not be modified at runtime
> >
> > librbd always accesses the config options in a thread-safe manner, so
> > I'll open a tracker ticket to flag all the RBD string config options
> > as runtime-updatable (primitive data type options are implicitly
> > runtime-updatable).
>
> I wasn't updating it at runtime; I just wanted to make sure that I don't
> have to set this in ceph.conf everywhere (and libvirt doesn't read
> ceph.conf).

You weren't updating it at runtime -- the MON's "MConfig" message back
to the client was attempting to set the config option after "rbd" had
already started. However, if it's working under Python, perhaps there
is an easy tweak for "rbd" to delay flagging the application as started
until after it has connected to the cluster. Right now "rbd" manages
its own CephContext lifetime, which it re-uses when creating a librados
connection, and it's that CephContext that is flagged as "running"
before librados actually connects to the cluster.

> But it seems that Python works:
>
> #!/usr/bin/python3
>
> import rados
> import rbd
>
> cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
> cluster.connect()
> ioctx = cluster.open_ioctx('rbd')
>
> rbd_inst = rbd.RBD()
> size = 4 * 1024**3  # 4 GiB
> rbd_inst.create(ioctx, 'myimage', size)
>
> ioctx.close()
> cluster.shutdown()
>
>
> And then:
>
> $ ceph config set client rbd_default_data_pool rbd-data
>
> $ rbd info myimage
> rbd image 'myimage':
>         size 4 GiB in 1024 objects
>         order 22 (4 MiB objects)
>         snapshot_count: 0
>         id: 1aa963a21028
>         data_pool: rbd-data
>         block_name_prefix: rbd_data.2.1aa963a21028
>         format: 2
>         features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, data-pool
>
>
> I haven't tested this through libvirt yet. That's the next thing to test.
>
> Wido
>
> >
> >> I tried:
> >>
> >> $ ceph config set client rbd_default_data_pool rbd-data
> >> $ ceph config set global rbd_default_data_pool rbd-data
> >>
> >> They both show up under:
> >>
> >> $ ceph config dump
> >>
> >> However, RBD images newly created with the 'rbd' CLI tool do not use
> >> the data pool.
> >>
> >> If I set this in ceph.conf it works:
> >>
> >> [client]
> >> rbd_default_data_pool = rbd-data
> >>
> >> Somehow librbd isn't fetching these configuration options. Any hints on
> >> how to get this working?
> >>
> >> The end result is that libvirt (which doesn't read ceph.conf) should
> >> also be able to create RBD images with a different data pool.
> >>
> >> Wido
> >> _______________________________________________
> >> ceph-users mailing list -- ceph-users@xxxxxxx
> >> To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >>
> >
> >
>
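
As a stopgap until the string options are flagged as runtime-updatable,
the data pool can also be passed to the CLI explicitly, e.g.:

$ rbd create --data-pool rbd-data --size 4G myimage

(image name reused from your example). For libvirt I'd expect the same
behaviour as your Python test, since it connects through librados before
it starts using librbd, but it's definitely worth confirming.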


-- 
Jason
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


