Re: Rados gateway data-pool replacement.

Hi Richard! Thanks a lot for your answer!

Indeed I’m soooo dumb…

I just performed a K/M migration on another service that involved RBD last
week, and it completely eluded me that you can, of course, change the
crush_rule of a pool without having to copy anything!!

OMFG… I can be so silly that it’s embarrassing…

Thanks to you, that just saved me weeks of data transfer, as this pool is
105 TB large xD

Gosh…

Thanks again.

On Wed, 26 Apr 2023 at 04:35, Richard Bade <hitrich@xxxxxxxxx> wrote:

> Hi Gaël,
> I'm actually embarking on a similar project, migrating an EC pool from
> k=2,m=1 to k=4,m=2 using rgw multisite sync.
> Before you do a lot of work for nothing, I just thought I'd check:
> when you say failure domain, do you mean the crush failure domain,
> not k and m? If it is the failure domain you mean, I wonder if you
> realise that you can change the crush rule on an EC pool?
> You can change the rule the same way as for other pool types, like this:
> sudo ceph osd pool set {pool_name} crush_rule {rule_name}
> At least, that is my understanding, and I have done so on a couple of my
> pools (changed from Host to Chassis failure domain).
> I found it a bit confusing in the docs, because you can't change the EC
> profile of a pool (due to the k and m numbers) and the crush rule is
> defined in the profile as well, but you can change the rule outside of
> the profile.
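>
> In case it helps, the rough sequence looks like this (only a sketch, not
> copy-paste ready; the profile and rule names are placeholders, and k/m
> must match your pool's existing profile):
>
>   # a profile used only to generate a rule with the new failure domain
>   ceph osd erasure-code-profile set ec-profile-chassis k=<k> m=<m> crush-failure-domain=chassis
>   # create an EC crush rule from that profile
>   ceph osd crush rule create-erasure ec-rule-chassis ec-profile-chassis
>   # point the pool at the new rule, then verify
>   sudo ceph osd pool set {pool_name} crush_rule ec-rule-chassis
>   sudo ceph osd pool get {pool_name} crush_rule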
>
> Regards,
> Rich
>
> On Mon, 24 Apr 2023 at 20:55, Gaël THEROND <gael.therond@xxxxxxxxxxxx>
> wrote:
> >
> > Hi Casey,
> >
> > I actually tested that while you were answering me :-)
> >
> > So, all in all, we can’t stop the radosgw for now, and the cache tier
> > option can’t work as we use EC-based pools (at least on Nautilus).
> >
> > Due to those constraints, we’re currently thinking of the following
> > procedure (rough command sketch after the list):
> >
> > 1°/- Create the new EC profile.
> > 2°/- Create the new EC-based pool and assign it the new profile.
> > 3°/- Create a new storage class that uses this new pool.
> > 4°/- Add this storage class to the default placement policy.
> > 5°/- Force a migration of existing bucket objects via lifecycle (possible??).
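> >
> > Something like this for steps 1 to 4, maybe (only a sketch, untested;
> > the profile, pool, and storage-class names are placeholders):
> >
> >   # 1. new EC profile with the appropriate failure domain
> >   ceph osd erasure-code-profile set new-ec-profile k=<k> m=<m> crush-failure-domain=host
> >   # 2. new EC pool using that profile
> >   ceph osd pool create default.rgw.buckets.data.new 64 64 erasure new-ec-profile
> >   ceph osd pool application enable default.rgw.buckets.data.new rgw
> >   # 3. declare the storage class on the zonegroup placement target
> >   radosgw-admin zonegroup placement add --rgw-zonegroup default \
> >       --placement-id default-placement --storage-class NEW_EC
> >   # 4. map the storage class to the new pool in the zone
> >   radosgw-admin zone placement add --rgw-zone default \
> >       --placement-id default-placement --storage-class NEW_EC \
> >       --data-pool default.rgw.buckets.data.new
> >   # then restart the radosgws (and commit the period if using realms)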
> >
> > It seems at least one user attempted to do just that here:
> >
> >
> https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/RND652IBFIG6ESSQXVGNX7NAGCNEVYOU
> >
> > The only part of that thread that I don’t get is this bit of Matt
> > Benjamin’s answer: « I think actually moving an already-stored object
> > requires a lifecycle transition policy… »
> >
> > What kind of policy should I write to do that?
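> >
> > Would a transition rule like this one do it? (Just a sketch, untested;
> > the endpoint, bucket, and storage-class names are placeholders.)
> >
> >   cat > lifecycle.json <<'EOF'
> >   {
> >     "Rules": [{
> >       "ID": "move-to-new-storage-class",
> >       "Status": "Enabled",
> >       "Filter": {"Prefix": ""},
> >       "Transitions": [{"Days": 0, "StorageClass": "NEW_EC"}]
> >     }]
> >   }
> >   EOF
> >   aws --endpoint-url https://rgw.example.com s3api \
> >       put-bucket-lifecycle-configuration --bucket my-bucket \
> >       --lifecycle-configuration file://lifecycle.json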
> >
> > Is this procedure something that looks ok to you?
> >
> > Kind regards!
> >
> > On Wed, 19 Apr 2023 at 14:49, Casey Bodley <cbodley@xxxxxxxxxx> wrote:
> >
> > > On Wed, Apr 19, 2023 at 5:13 AM Gaël THEROND <gael.therond@xxxxxxxxxxxx> wrote:
> > > >
> > > > Hi everyone, quick question regarding radosgw zone data-pool.
> > > >
> > > > I’m currently planning to migrate an old data-pool that was created
> > > > with an inappropriate failure-domain to a newly created pool with an
> > > > appropriate failure-domain.
> > > >
> > > > If I’m doing something like:
> > > > radosgw-admin zone modify --rgw-zone default --data-pool <new_pool>
> > > >
> > > > Will data from the old pool be migrated to the new one, or do I need
> > > > to do something else to migrate that data out of the old pool?
> > >
> > > radosgw won't migrate anything. You'll need to use rados tools to do
> > > that first. Make sure you stop all radosgws in the meantime, so they
> > > don't write more objects to the old data pool.
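> > >
> > > Roughly something like this (untested sketch; <new_pool> as in your
> > > command, the old pool name assumes the default zone layout, and note
> > > that cppool copies every object, which can take a long time):
> > >
> > >   systemctl stop ceph-radosgw.target        # on every rgw host
> > >   rados cppool default.rgw.buckets.data <new_pool>
> > >   radosgw-admin zone modify --rgw-zone default --data-pool <new_pool>
> > >   radosgw-admin period update --commit      # if using realms
> > >   systemctl start ceph-radosgw.target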
> > >
> > > > I’ve read a lot of mail archives with people wanting to do that,
> > > > but I can’t get a clear answer from those archives.
> > > >
> > > > I’m running the Nautilus release, if it ever helps.
> > > >
> > > > Thanks a lot!
> > > >
> > > > PS: This mail is a redo of the old one, as I’m not sure the former one
> > > > worked (missing tags).
> > >
> > >
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



