Increase number of PGs in a running system

Hi,

Looking at:
http://ceph.com/docs/master/rados/operations/pools/

It has this note roughly in the middle of the page:

---------------
Important
Increasing the number of placement groups in a pool after you create
the pool is still an experimental feature in Bobtail (v 0.56). We
recommend defining a reasonable number of placement groups and
maintaining that number until Ceph’s placement group splitting and
merging functionality matures.
---------------

However, I cannot find any reference for how to actually do this.
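
The closest thing I can find on that page is the generic "set pool
values" syntax, so I assume it would be something along the lines of
the following (pool name and count are just placeholders):

  ceph osd pool set <pool-name> pg_num <new-pg-count>
  ceph osd pool set <pool-name> pgp_num <new-pg-count>

But I have no idea whether running these against a pool that already
contains data is supported or safe on Bobtail, hence this mail.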

I'm asking because we have a test system with 10 TB of data in a pool
that only has the default 8 PGs.

So currently the cluster is moving ~400 GB PGs around whenever we test
removing disks.

The system sometimes wants to place two of these PGs on an OSD that
cannot hold them, and the cluster ends up full. For example, if an OSD
has ~750 GB free, it may decide to put two 400 GB PGs on it, even
though other OSDs have more free space (all disks are the exact same
type, size and weight).

The system is running Bobtail 0.56.2.

The pool holds two big RBD images, and creating a new pool with a
higher PG count and copying them over is not an option (there is not
enough free space in the cluster).

Thanks in advance,

Cheers,
Martin
--