Hi Zhang,

are you sure that all your 20 OSDs are up and in? Please provide the
complete output of ceph -s or, better, of ceph health detail.

Thank you :-)

--
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402, district court of Hanau
Managing director: Oliver Dzombic

Tax no.: 35 236 3622 1
VAT ID: DE274086107

On 22.03.2016 at 11:02, Zhang Qiang wrote:
> Hi all,
>
> I have 20 OSDs and 1 pool, and, as recommended by the docs
> (http://docs.ceph.com/docs/master/rados/operations/placement-groups/),
> I set pg_num and pgp_num to 4096, size to 2, and min_size to 1.
>
> But ceph -s shows:
>
>     HEALTH_WARN
>     534 pgs degraded
>     551 pgs stuck unclean
>     534 pgs undersized
>     too many PGs per OSD (382 > max 300)
>
> Why doesn't the value recommended for 10 ~ 50 OSDs, 4096, work here?
> And what does "too many PGs per OSD (382 > max 300)" mean? If each
> OSD had 382 PGs, wouldn't that be 7640 PGs in total?
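
As to the warning itself: it counts PG *instances* per OSD, so the
pool's replica count has to be factored in. With size 2, each of the
4096 PGs lives on two OSDs. A minimal sketch of the arithmetic in
Python (assuming all 20 OSDs are up and in, and assuming the 300 limit
is the default mon_pg_warn_max_per_osd of that era):

import math

pg_num, size, num_osds = 4096, 2, 20

# Each PG is stored on `size` OSDs, so an OSD carries on average
# pg_num * size / num_osds PG instances, not pg_num / num_osds.
avg_pgs_per_osd = pg_num * size / num_osds
print(avg_pgs_per_osd)    # 409.6 -> above the 300 warn threshold

# Working backwards from the ~100 PGs per OSD rule of thumb in the
# placement-groups doc, rounded up to the next power of two:
target_per_osd = 100
suggested_pg_num = 2 ** math.ceil(math.log2(num_osds * target_per_osd / size))
print(suggested_pg_num)   # 1024

The reported 382 is a bit below the 409.6 average, presumably because
the 534 undersized PGs are each missing a replica: (4096 * 2 - 534) / 20
is roughly 382. That again points back to the question of whether all
20 OSDs are really up and in.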