On 8/31/2012 11:05 PM, Sage Weil wrote:
Sadly, you can't yet adjust pg_num for an active pool. You can, however,
create a new pool:
ceph osd pool create <name> <pg_num>
I would aim for 20 * num_osd, or thereabouts; see
http://ceph.com/docs/master/ops/manage/grow/placement-groups/
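For example, here is a minimal sketch, assuming a hypothetical cluster of
12 OSDs and the pool name yunio3 used in the copy below:

  # Count the OSDs so we can size pg_num (12 is just an assumed value).
  ceph osd ls | wc -l

  # 20 * 12 OSDs = 240 placement groups.
  ceph osd pool create yunio3 240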
Then you can copy the data from the old pool to the new one with
rados cppool yunio2 yunio3
This won't be particularly fast, but it will work. Once the copy is done,
you can do
ceph osd pool rename <oldname> <newname>
ceph osd pool delete <name>
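Putting those together, one possible ordering for the whole swap (a sketch;
it assumes the old pool is yunio2, the new one is yunio3, and nothing is
writing to the pool during the copy):

  # Copy every object from the old pool into the new, larger one.
  rados cppool yunio2 yunio3

  # Move the old pool aside and put the copy in its place.
  ceph osd pool rename yunio2 yunio2-old
  ceph osd pool rename yunio3 yunio2

  # Once you've verified the copy, drop the old pool.
  ceph osd pool delete yunio2-old

Renaming the old pool aside first avoids a name collision, since two pools
can't share a name.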
I hope this solves your problem!
Looking through the old archives, I found this thread, which shows that to
mount a pool as CephFS, it needs to be added to the MDS:
http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/5685
I started a `rados cppool data tempstore` a couple of hours ago. When it
finishes, will I need to remove the current pool from the MDS somehow
(other than just deleting the pool)?
Is `ceph mds add_data_pool <poolname>` still required? (It's not listed
in `ceph --help`.)
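For reference, this is what I'm planning to run once the copy finishes; it
assumes add_data_pool/remove_data_pool still exist in current builds, and
I'm not sure whether they take a pool name or a numeric pool ID, so please
correct me if this is wrong:

  # Look up the pool IDs, in case the commands want an ID rather than a name.
  ceph osd dump | grep pool

  # Register the new pool with the MDS (assuming this command still exists).
  ceph mds add_data_pool tempstore

  # And presumably the inverse for the old pool, before deleting it.
  ceph mds remove_data_pool data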
Thanks.
--
Andrew Thompson
http://aktzero.com/