On Tue, Nov 8, 2016 at 9:37 AM, Dan Jakubiec <dan.jakubiec@xxxxxxxxx> wrote:
> Hello,
>
> Picking the number of PGs for the CephFS data pool seems straightforward, but how does one do this for the metadata pool?
>
> Any rules of thumb or recommendations?

I don't think we have any good ones yet. You've got to worry about the MDS log and about the backing directory objects; depending on how your CRUSH map looks, I'd just try to get enough PGs for a decent IO distribution across the disks you're actually using.

Given the much lower amount of absolute data, you're less worried about balancing the data precisely evenly and more concerned about not accidentally driving all the metadata IO onto one of your 7 disks because you only have 8 PGs, leaving all your supposedly-parallel ops contending. ;)
-Greg

> Thanks,
>
> -- Dan Jakubiec
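
[Editor's note: for readers wanting a concrete starting point, below is a minimal sketch, in Python, of the generic pg_num rule of thumb from the Ceph documentation (roughly 100 PGs per OSD across all pools, divided by the replica count, rounded to a power of two). It is not from this thread. The osd_count, replica_count, and pool_share values are illustrative assumptions; the metadata pool normally gets only a small share of the PG budget since it holds far less data than the data pool.]

```python
# Sketch of the generic pg_num rule of thumb from the Ceph docs:
# aim for ~100 PGs per OSD in total (across all pools), divide by the
# replication factor, and round to a power of two. The pool_share
# weight for the metadata pool is an illustrative assumption, not a
# Ceph default.

def suggest_pg_num(osd_count, replica_count, pool_share, target_pgs_per_osd=100):
    """Return a power-of-two pg_num suggestion for one pool."""
    raw = osd_count * target_pgs_per_osd * pool_share / replica_count
    # Round to the nearest power of two (Ceph prefers power-of-two pg_num).
    pg_num = 1
    while pg_num * 2 <= raw:
        pg_num *= 2
    if raw - pg_num > pg_num * 2 - raw:
        pg_num *= 2
    return max(pg_num, 8)  # keep at least a handful of PGs for parallelism

if __name__ == "__main__":
    # Hypothetical cluster: 24 OSDs, 3x replication, metadata pool given
    # ~5% of the PG budget since it stores far less data than the data pool.
    print(suggest_pg_num(osd_count=24, replica_count=3, pool_share=0.05))
```

The resulting number would then be passed to `ceph osd pool create <pool-name> <pg-num>` when creating the metadata pool; as Greg notes above, the real goal is simply enough PGs to spread metadata IO across the OSDs you actually have.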