Thanks Subhachandra,

That is a good point, but how do I calculate that PG count based on size?

On Thu, Aug 9, 2018 at 1:42 PM, Subhachandra Chandra
<schandra@xxxxxxxxxxxx> wrote:
> If pool1 is going to be much smaller than pool2, you may want more PGs in
> pool2 for better distribution of data.
>
> On Wed, Aug 8, 2018 at 12:40 AM, Sébastien VIGNERON
> <sebastien.vigneron@xxxxxxxxx> wrote:
>>
>> The formula seems correct for a 100 PG/OSD target.
>>
>> > On 8 Aug 2018, at 04:21, Satish Patel <satish.txt@xxxxxxxxx> wrote:
>> >
>> > Thanks!
>> >
>> > Do you have any comments on Question 1?
>> >
>> > On Tue, Aug 7, 2018 at 10:59 AM, Sébastien VIGNERON
>> > <sebastien.vigneron@xxxxxxxxx> wrote:
>> >> Question 2:
>> >>
>> >> ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
>> >>     set object or byte limit on pool
>> >>
>> >>> On 7 Aug 2018, at 16:50, Satish Patel <satish.txt@xxxxxxxxx> wrote:
>> >>>
>> >>> Folks,
>> >>>
>> >>> I am a little confused, so I just need clarification. I have 14 OSDs
>> >>> in my cluster and I want to create two pools (pool-1 & pool-2). How
>> >>> do I divide the PGs between the two pools with replication 3?
>> >>>
>> >>> Question 1:
>> >>>
>> >>> Is this the correct formula?
>> >>>
>> >>> 14 * 100 / 3 / 2 ≈ 233 (next power of 2 is 256)
>> >>>
>> >>> So I should set 256 PGs per pool, right?
>> >>>
>> >>> pool-1 = 256 pg & pgp
>> >>> pool-2 = 256 pg & pgp
>> >>>
>> >>>
>> >>> Question 2:
>> >>>
>> >>> How do I set a limit on a pool? For example, if I want pool-1 to use
>> >>> only 500GB and pool-2 to use the rest of the space?
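
To make Subhachandra's size-based suggestion concrete, here is a minimal
shell sketch. The 14 OSDs, replication 3, 100 PG/OSD target, pool names,
and 500 GB quota are taken from the thread; the 10%/90% data split and the
round_pow2 helper are illustrative assumptions, not a prescription:

    #!/bin/sh
    # Size-weighted PG split: give each pool a share of the cluster's PG
    # budget proportional to its expected share of the data.
    OSDS=14
    TARGET_PER_OSD=100    # common rule-of-thumb target
    REPLICAS=3
    TOTAL_PGS=$(( OSDS * TARGET_PER_OSD / REPLICAS ))  # ~466 PGs, all pools

    POOL1_SHARE=10   # pool-1 expected to hold ~10% of the data (assumption)
    POOL2_SHARE=90   # pool-2 expected to hold ~90% of the data (assumption)

    round_pow2() {   # round up to the next power of two
        p=1
        while [ "$p" -lt "$1" ]; do p=$(( p * 2 )); done
        echo "$p"
    }

    POOL1_PGS=$(round_pow2 $(( TOTAL_PGS * POOL1_SHARE / 100 )))  # 46 -> 64
    POOL2_PGS=$(round_pow2 $(( TOTAL_PGS * POOL2_SHARE / 100 )))  # 419 -> 512

    ceph osd pool create pool-1 "$POOL1_PGS" "$POOL1_PGS"
    ceph osd pool create pool-2 "$POOL2_PGS" "$POOL2_PGS"

    # Question 2: cap pool-1 at 500 GB; pool-2 keeps the rest of the space.
    # set-quota takes a raw byte count.
    ceph osd pool set-quota pool-1 max_bytes $(( 500 * 1024 * 1024 * 1024 ))

Note that rounding each pool up to a power of two overshoots the
~100 PG/OSD target a little (64 + 512 = 576 vs. ~466); rounding the large
pool down to 256 would stay under it. On the Luminous/Mimic releases
current at the time of this thread, pg_num can be raised later but not
lowered, so erring on the low side is safer.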