On Mon, Jun 24, 2013 at 1:37 PM, Travis Rhoden <trhoden@xxxxxxxxx> wrote:
> Hello folks,
>
> Is PG splitting considered stable now? I feel like I used to see it
> discussed all the time (and how it wasn't quite there), but haven't
> heard anything about it in a while. I remember seeing related bits in
> release notes and such, but never an announcement that "you can now
> increase the number of PGs in a pool".
>
> I was thinking about this because I just deployed (successfully) a
> small test cluster using ceph-deploy (first time I've gotten it to
> work -- pretty smooth this time). Since ceph-deploy has no idea how
> many OSDs in total you are about to activate/create, I suppose it has
> no idea how to take a good guess at the number of PGs to set for the
> "data" pool and kin. So instead I just got 64 PGs per pool, which is
> too low.
>
> Can I just increase it with "ceph osd set..." now?

Well, I think Sam's still nervous, but we've been running nightly stress
tests on it for quite a while, and he's been working on fixing bugs that
turn up when a PG gets "undeleted" after splitting (due to weird
migration patterns; that is, issues way down the line from the split
itself). So yes, we declared it ready to go and removed the
"yes-i-really-mean-it" flag a while ago. There's still no merging, of
course, but you can increase the values with "ceph osd pool set".

> If not, would the best approach be to override the default in
> ceph.conf in between "ceph-deploy new" and "ceph-deploy mon create" ?

That would also work.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
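For reference, a minimal sketch of the commands Greg is pointing at,
assuming a pool named "data" and a target of 256 PGs (both are
placeholder values, not recommendations); pgp_num has to be raised to
match pg_num before data actually rebalances into the new PGs:

    # check the current values
    ceph osd pool get data pg_num
    ceph osd pool get data pgp_num

    # raise pg_num first, then pgp_num to match, so the new PGs are
    # actually used for data placement
    ceph osd pool set data pg_num 256
    ceph osd pool set data pgp_num 256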
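For the ceph.conf route, a sketch assuming the pool-default options are
what applies on your release (worth double-checking for your version);
the lines would go in the [global] section of the ceph.conf that
"ceph-deploy new" writes out, before running "ceph-deploy mon create".
The values are placeholders; the usual rule of thumb is roughly 100 PGs
per OSD, divided by the replica count:

    [global]
    # pool-creation defaults; placeholder values, size to your OSD count
    osd pool default pg num = 256
    osd pool default pgp num = 256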