Re: Any recommendations for CephFS metadata/data pool sizing?


 



> On 1 July 2017 at 01:04, Tu Holmes <tu.holmes@xxxxxxxxx> wrote:
> 
> 
> I would use the calculator at ceph and just set for "all in one".
> 
> http://ceph.com/pgcalc/
> 

I wouldn't do that. With CephFS the data pool(s) will contain many more objects and much more data than the metadata pool.

You can easily have 1024 PGs for the metadata pool and 8192 for the data pool for example.

With the example of 512 PGs in total I'd assign 64 to the metadata pool and the remaining 448 to the data pool.
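As a rough sketch, that split could look like the following (pool names here are illustrative, not required by CephFS; pick pg_num values appropriate for your OSD count):

```shell
# 512 PGs total, split 64 (metadata) / 448 (data) as suggested above.
# Pool names are examples only.
ceph osd pool create cephfs_metadata 64 64
ceph osd pool create cephfs_data 448 448
ceph fs new cephfs cephfs_metadata cephfs_data
```

Note that pg_num can be increased later if the data pool grows, but it cannot be decreased on older releases, so it is safer to start conservatively.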

Wido

> 
> On Fri, Jun 30, 2017 at 6:45 AM Riccardo Murri <riccardo.murri@xxxxxxxxx>
> wrote:
> 
> > Hello!
> >
> > Are there any recommendations for how many PGs to allocate to a CephFS
> > meta-data pool?
> >
> > Assuming a simple case of a cluster with 512 PGs, to be distributed
> > across the FS data and metadata pools, how would you make the split?
> >
> > Thanks,
> > Riccardo
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >


