PG Splitting

Hello folks,

Is PG splitting considered stable now?  I feel like I used to see it
discussed all the time (along with how it wasn't quite ready yet), but
I haven't heard anything about it in a while.  I remember seeing
related bits in release notes and such, but never an announcement that
"you can now increase the number of PGs in a pool".

I was thinking about this because I just deployed (successfully) a
small test cluster using ceph-deploy (first time I've gotten it to
work -- pretty smooth this time).  Since ceph-deploy has no idea how
many OSDs in total you are about to activate/create, I suppose it has
no way to take a good guess at the number of PGs to set for the
"data" pool and its kin.  So instead I just got 64 PGs per pool, which
is too low.

Can I just increase it with "ceph osd pool set ..." now?
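
For reference, I'm guessing it would look something like the following,
based on my reading of the pool-set syntax (the pool name and PG count
here are just examples from my test setup):

    # bump the placement group count for the "data" pool
    ceph osd pool set data pg_num 256
    # then bump pgp_num to match, so the new PGs are actually used for placement
    ceph osd pool set data pgp_num 256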

If not, would the best approach be to override the default in
ceph.conf in between "ceph-deploy new" and "ceph-deploy mon create"?
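
If that's the way to go, I assume the relevant settings are the pool
defaults, i.e. something like this in the [global] section (the option
names are my best guess from the docs, and the values are just
examples):

    [global]
        # defaults applied to any pools created after this point
        osd pool default pg num = 256
        osd pool default pgp num = 256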

Thanks,

 - Travis