Re: Erasure coding RBD pool for OpenStack

Hi Max,

Would you mind sharing some config examples? What happens if we create an
instance that boots from a newly created or an existing volume?

Best regards,


On Fri, Aug 28, 2020 at 5:27 PM Max Krasilnikov <pseudo@xxxxxxxxxxxxx>
wrote:

> Hello!
>
>  Fri, Aug 28, 2020 at 04:05:55PM +0700, mrxlazuardin wrote:
>
> > Hi Konstantin,
> >
> > I hope you or anybody else still follows this old thread.
> >
> > Can this EC data pool be configured per pool, not per client? If we
> > follow https://docs.ceph.com/docs/master/rbd/rbd-openstack/ we can see
> > that the cinder client will access the vms and volumes pools, both with
> > read and write permission. How can we handle this?
> >
> > If we configure different clients for nova (vms) and cinder (volumes), I
> > think there will be a problem if there is cross-pool access, especially
> > on write. Let's say the nova client creates a volume at instance creation
> > time for booting from that volume. Any thoughts?
>
> As of these docs, nova accesses the pools as the client.cinder user. When
> using replicated + erasure pools with cinder, I created two different users
> for them and two different backends in cinder.conf for the same cluster,
> with different credentials, since rbd_default_data_pool can only be set per
> user in ceph.conf. So there were 2 different rbd uids installed in libvirt
> and 2 different volume types in cinder.
>
> As I understand it, you need something like my setup, roughly as sketched
> below.
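>
> A rough sketch of that layout (user, pool, and backend names here are only
> illustrative, not my exact config):
>
> # ceph.conf: one client section per cinder user, since
> # rbd_default_data_pool is set per client
> [client.cinder]
> keyring = /etc/ceph/ceph.client.cinder.keyring
>
> [client.cinder-ec]
> keyring = /etc/ceph/ceph.client.cinder-ec.keyring
> rbd_default_data_pool = volumes-ec-data
>
> # cinder.conf: two backends against the same cluster, each with its own
> # user and its own libvirt secret
> enabled_backends = rbd-replicated,rbd-ec
>
> [rbd-replicated]
> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> volume_backend_name = rbd-replicated
> rbd_pool = volumes
> rbd_user = cinder
> rbd_ceph_conf = /etc/ceph/ceph.conf
> rbd_secret_uuid = <libvirt secret uuid for client.cinder>
>
> [rbd-ec]
> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> volume_backend_name = rbd-ec
> # replicated pool holding RBD metadata; data objects for this user land
> # in volumes-ec-data via rbd_default_data_pool
> rbd_pool = volumes
> rbd_user = cinder-ec
> rbd_ceph_conf = /etc/ceph/ceph.conf
> rbd_secret_uuid = <libvirt secret uuid for client.cinder-ec>
>
> Then each backend gets its own cinder volume type, mapped via
> volume_backend_name.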
>
> >
> > Best regards,
> >
> >
> > > Date: Wed, 11 Jul 2018 11:16:27 +0700
> > > From: Konstantin Shalygin <k0ste@xxxxxxxx>
> > > To: ceph-users@xxxxxxxxxxxxxx
> > > Subject: Re:  Erasure coding RBD pool for OpenStack
> > >         Glance, Nova and Cinder
> > > Message-ID: <069ac368-22b0-3d18-937b-70ce39287cb1@xxxxxxxx>
> > > Content-Type: text/plain; charset=utf-8; format=flowed
> > >
> > > > So if you want, two more questions for you:
> > > >
> > > > - How do you handle your ceph.conf configuration (default data pool
> > > >   per user) / distribution? Manually, config management,
> > > >   openstack-ansible...?
> > > > - Did you make comparisons/benchmarks between replicated pools and EC
> > > >   pools, on the same hardware / drives? I read that small writes are
> > > >   not very performant with EC.
> > >
> > >
> > > ceph.conf with the default data pool is only needed by Cinder at image
> > > creation time; after that, a luminous+ rbd client will find the
> > > "data-pool" feature on the image and will perform data I/O to this pool.
> > >
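> > > E.g. a single per-client line in ceph.conf is enough for that (pool and
> > > user names here follow the example below):
> > >
> > > [client.cinder]
> > > rbd_default_data_pool = erasure_rbd_data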
> > >
> > > > # rbd info erasure_rbd_meta/volume-09ed44bf-7d16-453a-b712-a636a0d3d812   <----- meta pool!
> > > > rbd image 'volume-09ed44bf-7d16-453a-b712-a636a0d3d812':
> > > >         size 1500 GB in 384000 objects
> > > >         order 22 (4096 kB objects)
> > > >         data_pool: erasure_rbd_data          <----- our data pool
> > > >         block_name_prefix: rbd_data.6.a2720a1ec432bf
> > > >         format: 2
> > > >         features: layering, exclusive-lock, object-map, fast-diff,
> > > >         deep-flatten, data-pool              <----- "data-pool" feature
> > > >         flags:
> > > >         create_timestamp: Sat Jan 27 20:24:04 2018
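> > >
> > > The same kind of image can also be created by hand, e.g. something like
> > > (pool names as in the output above, image name is just a placeholder):
> > >
> > > # rbd create --size 1500G --data-pool erasure_rbd_data erasure_rbd_meta/some-image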
> > >
> > >
> > >
> > > k
> > >
> > >
> > >
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


