Hi,
On Fri, Mar 15, 2013 at 9:52 AM, Sebastien Han <sebastien.han@xxxxxxxxxxxx> wrote:
Hi,

It's not recommended to use this command yet.
As a workaround you can do:

$ ceph osd pool create <my-new-pool> <pg_num>
$ rados cppool <my-old-pool> <my-new-pool>
$ ceph osd pool delete <my-old-pool>
$ ceph osd pool rename <my-new-pool> <my-old-pool>
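Before deleting the old pool, it's worth sanity-checking that everything was copied, e.g. by comparing object counts (the pool names here are just placeholders):

$ rados -p <my-old-pool> ls | wc -l
$ rados -p <my-new-pool> ls | wc -l

The two counts should match before you drop the old pool.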
We've just done exactly this on the default pool data, and it leaves cephfs mounts in a hanging state. Is that expected?
Cheers, Dan
––––
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
PHONE : +33 (0)1 49 70 99 72 – MOBILE : +33 (0)6 52 84 44 70
EMAIL : sebastien.han@xxxxxxxxxxxx – SKYPE : han.sbastien
ADDRESS : 10, rue de la Victoire – 75009 Paris
WEB : www.enovance.com – TWITTER : @enovance

On Mar 15, 2013, at 9:27 AM, Marco Aroldi <marco.aroldi@xxxxxxxxx> wrote:

Hi,
I have a new cluster with no data.
It currently has 44 OSDs, and my goal is to grow it to a total of 88 OSDs over the next few months.
My pgmap is:
pgmap v841: 8640 pgs: 8640 active+clean; 8730 bytes data, 1733 MB used, 81489 GB / 81491 GB avail
That's 2880 PGs each for the data, metadata and rbd pools; this value was set by mkcephfs.
Chatting on the IRC channel, I was told to allow about 100 PGs per OSD and round to the nearest power of 2.
So in my case that would be 8192 PGs for each pool, right?
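As a quick check of that math: 88 OSDs x 100 = 8800, and the nearest power of 2 is 2^13 = 8192. A throwaway one-liner gives the same answer:

$ python -c 'import math; print(2 ** int(round(math.log(88 * 100, 2))))'
8192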
My question:
Knowing that I will double the number of OSDs, is it advisable to increase pg_num right now with the following commands?
ceph osd pool set data pg_num 8192 --allow-experimental-feature
ceph osd pool set metadata pg_num 8192 --allow-experimental-feature
ceph osd pool set rbd pg_num 8192 --allow-experimental-feature
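(I assume I could then verify the new value with something like
$ ceph osd pool get data pg_num
but please correct me if that's wrong.)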
Thanks
--
Marco Aroldi
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com