Re: CephFS - PG Count Question

On Wed, Jan 25, 2017 at 12:56 PM, James Wilkins
<James.Wilkins@xxxxxxxxxxxxx> wrote:
> Apologies if this is documented but I could not find any clear-cut advice
>
>
>
> Is it better to have a higher PG count for the metadata pool, or the data
> pool of a CephFS filesystem?
>
>
>
> If I look at
> http://www.slideshare.net/XiaoxiChen3/cephfs-jewel-mds-performance-benchmark
> - specifically slide 06 - I can see they used 32,768 for metadata and 8,192
> for the data pool.

In general your metadata pool will have many fewer objects than your
data pool, so you can get away with a lower PG count.  If you target
SSDs for your metadata pool then you'd definitely want a lower pg_num
to reflect the (likely lower) number of SSDs backing it.
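
For a rough idea of the sizing, here's a quick sketch (not from this
thread; it just applies the usual rule of thumb of ~100 PGs per OSD,
divided by the replica count and rounded up to a power of two, with
made-up OSD counts for each pool):

    # Sketch only: the common "~100 PGs per OSD" rule of thumb,
    # applied per pool to the OSDs backing it. OSD counts below
    # are hypothetical examples, not recommendations.
    def suggested_pg_num(osd_count, replica_count, target_pgs_per_osd=100):
        """Return a power-of-two pg_num near target_pgs_per_osd per OSD."""
        raw = osd_count * target_pgs_per_osd / replica_count
        pg_num = 1
        while pg_num < raw:   # round up to the next power of two
            pg_num *= 2
        return pg_num

    # e.g. 12 SSD OSDs behind metadata, 200 HDD OSDs behind data, size=3
    print(suggested_pg_num(12, 3))    # -> 512  (metadata pool)
    print(suggested_pg_num(200, 3))   # -> 8192 (data pool)
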

John

>
>
> Cheers,
>
>
>
>
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



