Re: How to pick the number of PGs for a CephFS metadata pool?

Thanks Greg, makes sense.

Our Ceph cluster currently has 16 OSDs, each with an 8 TB disk.

Sounds like 32 PGs at 3x replication might be a reasonable starting point?
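
For reference, the quick arithmetic: 32 PGs x 3 replicas = 96 PG copies
spread over 16 OSDs, i.e. 6 metadata PG copies per OSD, which seems like
enough to keep metadata IO from piling up on just a couple of disks
without bloating the per-OSD PG count.

In case it helps anyone following the thread, a rough sketch of the
commands (the pool name "cephfs_metadata" is just an example, adjust for
your own setup):

    # create the metadata pool with 32 PGs (pg_num and pgp_num)
    ceph osd pool create cephfs_metadata 32 32
    # set 3x replication on it
    ceph osd pool set cephfs_metadata size 3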

Thanks,

-- Dan

> On Nov 8, 2016, at 14:02, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
> 
> On Tue, Nov 8, 2016 at 9:37 AM, Dan Jakubiec <dan.jakubiec@xxxxxxxxx> wrote:
>> Hello,
>> 
>> Picking the number of PGs for the CephFS data pool seems straightforward, but how does one do this for the metadata pool?
>> 
>> Any rules of thumb or recommendations?
> 
> I don't think we have any good ones yet. You've got to worry about the
> MDS log and about the backing directory objects; depending on how your
> CRUSH map looks, I'd just try to get enough PGs for a decent IO
> distribution across the disks you're actually using. Given the much
> lower amount of absolute data, you're less worried about balancing the
> data precisely evenly and more concerned about not accidentally
> driving all IO to one of 7 disks because you only have 8 PGs, so all
> your supposedly-parallel ops end up contending. ;)
> -Greg
> 
>> 
>> Thanks,
>> 
>> -- Dan Jakubiec
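
P.S. To sanity-check Greg's point about IO piling up on a small number
of disks, something like the following should show how the metadata PGs
actually land on the OSDs (again assuming the pool is named
"cephfs_metadata"):

    # per-OSD utilization and total PG counts
    ceph osd df
    # which OSDs each PG in the pool maps to (up/acting sets)
    ceph pg ls-by-pool cephfs_metadata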

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


