Re: Radosgw replicated -> EC pool

A while ago I moved from a replicated pool to an EC pool using the
procedure below (downtime of the service during the data migration was
acceptable in my case):



# Stop rgw on all instances
systemctl stop ceph-radosgw.target

# Create the new EC pool for rgw data
ceph osd pool create cloudprod.rgw.buckets.data.new 32 32 erasure profile-4-2
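
(The profile-4-2 profile must already exist. If it doesn't, something like
the following creates it; k=4/m=2 and the failure domain here are only an
assumption based on the profile name, so adjust them to your cluster:)

# Create the EC profile (example values, verify before use)
ceph osd erasure-code-profile set profile-4-2 k=4 m=2 crush-failure-domain=host
ceph osd erasure-code-profile get profile-4-2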

# copy data from old pool to the new one
rados cppool cloudprod.rgw.buckets.data cloudprod.rgw.buckets.data.new
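
(Note that EC pools cannot store omap data; the rgw data pool normally
holds plain object data only, so rados cppool works here, but a quick
sanity check on the object counts doesn't hurt:)

# Both counts should match after the copy
rados -p cloudprod.rgw.buckets.data ls | wc -l
rados -p cloudprod.rgw.buckets.data.new ls | wc -l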

# Rename the pools
ceph osd pool rename cloudprod.rgw.buckets.data cloudprod.rgw.buckets.data.old
ceph osd pool rename cloudprod.rgw.buckets.data.new cloudprod.rgw.buckets.data
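
(Since the new pool ends up under the original name, the zone/placement
configuration does not need to be touched; you can double-check that it
still points at the right pool:)

# data_pool should still read cloudprod.rgw.buckets.data
radosgw-admin zone get | grep data_pool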

# Application setting
ceph osd pool application enable cloudprod.rgw.buckets.data rgw

# Delete old pool
ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
ceph osd pool delete cloudprod.rgw.buckets.data.old cloudprod.rgw.buckets.data.old --yes-i-really-really-mean-it
ceph tell mon.\* injectargs '--mon-allow-pool-delete=false'
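
(On recent Ceph releases the same toggle can also be done through the
config database instead of injectargs:)

# Equivalent to the injectargs calls above; run the pool delete in between
ceph config set mon mon_allow_pool_delete true
ceph config set mon mon_allow_pool_delete false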

# Restart all rgw instances
systemctl start ceph-radosgw.target
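
(And a quick check that rgw came back up and can read the migrated data,
for example:)

# Cluster and daemon state
ceph -s
# Buckets should still be listed; fetching an existing object through
# your usual S3 client is a good final test
radosgw-admin bucket list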


Cheers, Massimo


On Tue, Jan 2, 2024 at 6:02 PM Jan Kasprzak <kas@xxxxxxxxxx> wrote:

>         Hello, Ceph users,
>
> what is the best way to change the storage layout of all buckets
> in radosgw?
>
> I have default.rgw.buckets.data pool as replicated, and I want to use
> an erasure-coded layout instead. One way is to use cache tiering
> as described here:
>
> https://cephnotes.ksperis.com/blog/2015/04/15/ceph-pool-migration/
>
> Could this be done while radosgw is running? If I read this correctly,
> it should be possible, because radosgw is just another RADOS client.
>
> Another possible approach would be to create a new erasure-coded pool,
> a new zone placement, and set it as default. But how can I migrate
> the existing data? If I understand it correctly, the default placement
> applies only to new buckets.
>
> Something like this:
>
> ceph osd erasure-code-profile set k5m2 k=5 m=2
> ceph osd pool create default.rgw.buckets.ecdata erasure k5m2
> ceph osd pool application enable default.rgw.buckets.ecdata rgw
>
> radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id ecdata-placement
> radosgw-admin zone placement add --rgw-zone default --placement-id ecdata-placement --data-pool default.rgw.buckets.ecdata --index-pool default.rgw.buckets.index --data-extra-pool default.rgw.buckets.non-ec
> radosgw-admin zonegroup placement default --rgw-zonegroup default --placement-id ecdata-placement
>
> How should I continue from this point?
>
> And a secondary question: what purpose does the data-extra-pool serve?
>
> Thanks!
>
> -Yenya
>
> --
> | Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}>
> |
> | https://www.fi.muni.cz/~kas/                        GPG: 4096R/A45477D5
> |
>     We all agree on the necessity of compromise. We just can't agree on
>     when it's necessary to compromise.                     --Larry Wall
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



