Increasing the number of PGs

 Greetings,

 I've been testing Ceph on a cluster where I have added new OSDs
several times, to the point where I now have 46 OSDs but only 192 PGs
per pool, which seems rather far from what the documentation suggests
(about 100 PGs per OSD in each pool).
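 To make the gap concrete, here is a small Python sketch of how I read
that guideline; the helper name and the rounding-up-to-a-power-of-two
step are just my own assumptions for illustration, not something taken
from the docs:

# Back-of-the-envelope pg_num estimate from the "about 100 PGs per OSD"
# guideline above.  Rounding up to a power of two is my assumption (I
# recall it being suggested for splitting), not a documented requirement.

def suggested_pg_num(num_osds, pgs_per_osd=100, round_to_pow2=True):
    raw = num_osds * pgs_per_osd
    if not round_to_pow2:
        return raw
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

print(suggested_pg_num(46))                       # 8192 (raw target: 4600)
print(suggested_pg_num(46, round_to_pow2=False))  # 4600

 Either way, 192 PGs per pool looks like more than an order of magnitude
below that target, which is what prompted this question.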

 Some past messages on this list and some documentation suggest that
increasing pg_num on a cluster with data won't give good results and
that PG splitting is not functional at this time.

 Can anyone clarify whether this is still the case as of v0.43 and/or
v0.44, or whether it is now safe to increase pg_num on a pool with live
data?

Thanks in advance

Best regards

Cláudio
