Silly question regarding PGs per OSD

Hi,

I guess this is an extremely silly question, but...

I often read that the ideal ratio is 100-200 PGs per OSD.
How is this number calculated?

When I do "ceph -s" it correctly says I have 320 PGs in 5 pools.
However, this doesn't account for the replicas, does it?

I mean, I have the following table, which shows how many PGs of each pool
are stored on which OSD.


pool :	1	2	3	4	5	| SUM
--------------------------------------------------------
osd.0	7	5	7	11	24	| 54
osd.1	16	18	15	37	26	| 112
osd.2	9	9	10	16	30	| 74
osd.3	6	12	9	11	23	| 61
osd.4	8	10	9	16	29	| 72
osd.5	10	14	9	17	17	| 67
osd.6	7	17	10	27	27	| 88
osd.7	11	17	11	22	38	| 99
osd.8	14	14	11	15	23	| 77
osd.9	8	12	5	20	19	| 64
--------------------------------------------------------
SUM :	96	128	96	192	256	|


If I sum up the SUM column I am clearly above 320 (768, in fact). So for
the 100/OSD or 200/OSD guideline: does it mean the count *AFTER* taking
all replicas into account?
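My guess at how the per-OSD count works out once replicas are included, as
a small sketch. The pg_num/size pairs below are assumptions on my part,
chosen only because they reproduce the per-pool column sums in my table
(e.g. 32 PGs x size 3 = 96 placements for pool 1); the real values would
come from "ceph osd pool get <pool> pg_num" and "... size".

```python
# Hypothetical pool settings (pg_num, replica size) -- chosen to match
# the per-pool placement sums in the table above, not taken from a real
# cluster.
pools = {
    1: (32, 3),   # 32 * 3 = 96 placements
    2: (64, 2),   # 64 * 2 = 128
    3: (32, 3),   # 32 * 3 = 96
    4: (64, 3),   # 64 * 3 = 192
    5: (128, 2),  # 128 * 2 = 256
}
num_osds = 10

# What "ceph -s" reports: PGs without counting replicas.
total_pgs = sum(pg_num for pg_num, _ in pools.values())

# Total PG placements, i.e. PGs counted once per replica.
total_placements = sum(pg_num * size for pg_num, size in pools.values())

# Average placements per OSD -- presumably what the 100-200 rule refers to.
pgs_per_osd = total_placements / num_osds

print(total_pgs)         # 320
print(total_placements)  # 768
print(pgs_per_osd)       # 76.8
```

With these assumed sizes the numbers line up with what I see: 320 PGs
reported, 768 placements spread over 10 OSDs, so roughly 77 per OSD on
average.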


Regards
Martin


-- 
"Things are only impossible until they're not"
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
