Re: CephFS: number of PGs for metadata pool

Hi Jan,

Thanks for the reply. I see your point about replicas. However, my motivation was a bit different.

Consider a given number of objects stored in the metadata pool.
If I understood Ceph's data placement approach correctly, the number of objects per PG should decrease as the number of PGs in the pool increases.

So my concern is that, in the catastrophic event of some PG(s) being lost, I will lose more objects if the number of PGs in the pool is small. At the same time, I don't want too few objects per PG (i.e. too many PGs), so that things stay disk-IO bound rather than CPU bound.
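To make the trade-off concrete, here is a back-of-the-envelope sketch in plain Python (not a Ceph API). It assumes CRUSH spreads objects roughly uniformly across PGs, which only holds approximately, and the 10M object count is a made-up figure:

# Rough model: objects are spread ~uniformly across PGs, so losing
# a whole PG loses ~1/pg_num of the pool's objects.

def objects_per_pg(total_objects, pg_num):
    """Expected number of objects mapped to a single PG."""
    return total_objects / pg_num

def expected_loss_fraction(pgs_lost, pg_num):
    """Expected fraction of objects lost if whole PGs become unrecoverable."""
    return pgs_lost / pg_num

total_objects = 10_000_000  # hypothetical metadata object count

for pg_num in (64, 256, 1024):
    print(f"pg_num={pg_num:5d}: "
          f"~{objects_per_pg(total_objects, pg_num):,.0f} objects/PG, "
          f"1 lost PG => ~{expected_loss_fraction(1, pg_num):.2%} of objects")

So with 64 PGs a single lost PG takes roughly 1.6% of the metadata with it, while with 1024 PGs it takes roughly 0.1%, at the cost of more PGs for the OSDs and monitors to track.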

So I was wondering whether somebody has done some research in this direction?
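For reference, the rule of thumb from the Ceph placement-groups documentation is roughly (number of OSDs × 100) / pool size, rounded up to the nearest power of two, as a cluster-wide total to split across pools. A minimal sketch for the 16-OSD cluster described in my original message below, assuming size-3 replication (the replica count is an assumption, it was not stated earlier):

def suggested_total_pgs(osd_count, pool_size, target_pgs_per_osd=100):
    """Cluster-wide PG budget per the Ceph docs' rule of thumb,
    rounded up to the next power of two."""
    raw = osd_count * target_pgs_per_osd / pool_size
    power = 1
    while power < raw:
        power *= 2
    return power

print(suggested_total_pgs(osd_count=16, pool_size=3))  # raw ~533 -> 1024

That total still leaves open how to split it between the data and metadata pools, which is exactly the question.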

On Wed, Dec 9, 2015 at 1:13 PM, Jan Schermer <jan@xxxxxxxxxxx> wrote:
Number of PGs doesn't affect the number of replicas, so don't worry about it.

Jan
On 09 Dec 2015, at 13:03, Mykola Dvornik <mykola.dvornik@xxxxxxxxx> wrote:

Hi guys,

I am creating a 4-node/16-OSD/32-TB CephFS from scratch.

According to the Ceph documentation, the metadata pool should have a small number of PGs, since it contains a negligible amount of data compared to the data pool. This makes me feel it might not be safe.

So I was wondering how to choose the number of PGs for the metadata pool to maintain its performance and reliability?

Regards,
Mykola
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
