Re: pgs per OSD

Each PG is counted once per replica (or EC shard), i.e. pg_num * size summed over all pools, divided by the number of OSDs:

(128*2 + 256*2 + 256*14 + 256*5) / 15 =~ 375.
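
In case it helps, here is a minimal sketch of that arithmetic in Python (the pool figures are taken from the `ceph osd pool ls detail` output quoted below; the helper function and names are just illustrative, not part of Ceph):

# Each PG is counted once per replica (replicated pools) or per shard
# (erasure-coded pools), so the warning uses sum(pg_num * size) / num_osds.
pools = {
    # name        (pg_num, size)
    "rep2host": (128, 2),
    "rep2osd":  (256, 2),
    "ec104osd": (256, 14),   # EC 10+4 -> size 14
    "ec32osd":  (256, 5),    # EC 3+2  -> size 5
}

def pgs_per_osd(pools, num_osds):
    """Total PG instances (pg_num * size summed over pools) divided by OSD count."""
    total = sum(pg_num * size for pg_num, size in pools.values())
    return total / num_osds

print(pgs_per_osd(pools, 15))   # ~375.5, which exceeds the 300 warning threshold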

On Thursday, November 05, 2015 10:21:00 PM Deneau, Tom wrote:
> I have the following 4 pools:
> 
> pool 1 'rep2host' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 88 flags hashpspool stripe_width 0
> pool 17 'rep2osd' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 256 pgp_num 256 last_change 154 flags hashpspool stripe_width 0
> pool 20 'ec104osd' erasure size 14 min_size 10 crush_ruleset 7 object_hash rjenkins pg_num 256 pgp_num 256 last_change 163 flags hashpspool stripe_width 4160
> pool 21 'ec32osd' erasure size 5 min_size 3 crush_ruleset 6 object_hash rjenkins pg_num 256 pgp_num 256 last_change 165 flags hashpspool stripe_width 4128
> 
> with 15 up osds.
> 
> and ceph health tells me I have too many PGs per OSD (375 > 300)
> 
> I'm not sure where the 375 comes from, since there are 896 pgs and 15 osds =
> approx. 60 pgs per OSD.
> 
> -- Tom
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com