too many PGs per OSD (307 > max 300)

Hi list,

I just followed the placement group guide to set pg_num for the rbd pool.

  "
  Less than 5 OSDs set pg_num to 128
  Between 5 and 10 OSDs set pg_num to 512
  Between 10 and 50 OSDs set pg_num to 4096
  If you have more than 50 OSDs, you need to understand the tradeoffs and how to
  calculate the pg_num value by yourself
  For calculating pg_num value by yourself please take help of pgcalc tool
  "

Since I have 40 OSDs, I set pg_num to 4096 according to the above
recommendation.

However, after setting both pg_num and pgp_num to 4096, I found that my
ceph cluster is in **HEALTH_WARN** status, which surprised me and still
confuses me.

    cluster bf6fa9e4-56db-481e-8585-29f0c8317773
     health HEALTH_WARN
            too many PGs per OSD (307 > max 300)
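
If my arithmetic is right, the 307 comes straight from the pool's pg_num, its
replica count, and the number of OSDs. A minimal sketch of the calculation,
assuming replica size 3 on the rbd pool and that it is the only pool (both
assumptions on my part):

    # Where the "307" appears to come from. Replica size 3 is my
    # assumption; any additional pools would add to this number.
    pg_num = 4096
    pool_size = 3
    osds = 40

    pgs_per_osd = pg_num * pool_size / osds
    print(pgs_per_osd)  # 307.2, reported as "307 > max 300"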

I see the cluster also reports "4096 active+clean", so the data is safe, but I
do not like the HEALTH_WARN at all.

As far as I know (from the ceph -s output), the recommended number of PGs per
OSD is in the range [30, 300]; any value outside this range will put the
cluster into HEALTH_WARN.
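
In hindsight, had I run the numbers the way pgcalc apparently does, I would
have ended up well inside that range. A rough sketch, assuming the usual
target of ~100 PGs per OSD and replica size 3 (again my assumptions, not
values from the document):

    # pgcalc-style estimate: (OSDs * target PGs per OSD) / replica size,
    # rounded up to the next power of two. The target of 100 and the
    # size of 3 are assumptions about my setup.
    osds = 40
    target_pgs_per_osd = 100
    pool_size = 3

    raw = osds * target_pgs_per_osd // pool_size  # 1333
    pg_num = 1 << (raw - 1).bit_length()          # 2048
    print(pg_num)

With 2048 PGs on 40 OSDs and 3 replicas that would be roughly 154 PGs per OSD,
well within [30, 300] and quite different from the 4096 the table told me to
use.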

So what I would like to ask is: is the documentation misleading? Should we fix it?

-- 
Thanks,
Chengwei


