increase pg num for .rgw.buckets

Hi,

I have an existing bucket with about 2.5 TB of data on a single system with 24 OSDs. The data can be restored from S3, but I'd rather not do that, as it would take a long time.
It turns out that pg_num for the .rgw.buckets pool defaults to 8, which we had not realised before.
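For anyone wanting to check theirs, I believe the following shows it (assuming the default pool name; output format may differ between versions):

  ceph osd pool get .rgw.buckets pg_num    # reports "pg_num: 8" here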

The question is what to do now. I understand the options are:

0) Recreate the pool with the correct pg_num setting. This nukes the data, so it's not a real option.

1) Increase pg_num on the existing pool using the experimental feature. This might do something bad, but I'm not sure what exactly (see the command sketch below).
2) Use pg splitting. I'm not sure this even works yet, and it could well cause problems.
3) Create a new pool with the correct pg_num, copy .rgw.buckets over, destroy the original pool, and rename the new one (also sketched below). I'm not sure this currently works with the gateway.
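For the record, here is roughly what I believe these would look like on the command line. This is a sketch only; I haven't run any of it, the target of 1024 PGs is just a guess from the ~100-PGs-per-OSD rule of thumb, and the --allow-experimental-feature flag for option 1 is something I have only seen mentioned for bobtail, so please correct me:

Option 1:

  # raise pg_num; on bobtail this is reportedly gated behind an experimental flag
  ceph osd pool set .rgw.buckets pg_num 1024 --allow-experimental-feature
  # pgp_num must follow, or data won't actually rebalance onto the new PGs
  ceph osd pool set .rgw.buckets pgp_num 1024

Option 3:

  # create the replacement pool with the desired pg_num up front
  ceph osd pool create .rgw.buckets.new 1024
  # copy all objects across (the gateway presumably has to be stopped meanwhile)
  rados cppool .rgw.buckets .rgw.buckets.new
  # swap the pools; the delete syntax varies between releases
  ceph osd pool delete .rgw.buckets
  ceph osd pool rename .rgw.buckets.new .rgw.buckets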

There have been developments on all three options, so I would like to know what my best strategy is. I'm currently on 0.56.4, but I'm willing to upgrade to solve this.
I can wait a while, as my OSDs haven't filled up yet, but I would like to fix this in the coming days.

Any advice is greatly appreciated.

Regards,
Joachim
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



