Re: Maximizing OSD to PG quantity

For a 12-OSD setup, the documentation gives the following formula:

             (OSDs * 100)
Total PGs =  ------------
               pool size


So with 12 OSDs and a pool size of 2 replicas, that works out to 600 total PGs, as per this URL:

http://docs.ceph.com/docs/master/rados/operations/placement-groups/#preselection
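Plugging our numbers into that formula:

    Total PGs = (12 * 100) / 2 = 600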

Yet at the top of the same page it says:

"Between 10 and 50 OSDs set pg_num to 4096"

Our use case is shared hosting, so there are lots of small reads and writes. Which of these figures is correct?

Also, is it a simple process to update PGs on a live system without affecting service?
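For reference, I assume the change would be something like the commands below (the pool name "data" and the target of 1024 are just placeholders, not our real values):

    # Check the pool's current PG counts
    ceph osd pool get data pg_num
    ceph osd pool get data pgp_num

    # Raise pg_num first, then pgp_num to match, since placement
    # only uses the new PGs once pgp_num is increased as well
    ceph osd pool set data pg_num 1024
    ceph osd pool set data pgp_num 1024

As far as I understand, pg_num can be increased but not decreased on current releases, so we would rather pick the right number up front.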
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


