Re: pgs down after adding 260 OSDs & increasing PGs

Dear Nick & Wido,

Many thanks for your helpful advice; our cluster has returned to HEALTH_OK.

One caveat is that a small number of pgs remained at "activating".

By increasing mon_max_pg_per_osd from 500 to 1000, these few PGs activated, allowing the cluster to rebalance fully.

i.e. this setting was needed:
mon_max_pg_per_osd = 1000

Once the cluster returned to HEALTH_OK, the mon_max_pg_per_osd override was removed.
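For anyone searching the archives later, a rough sketch of how such an override can be applied and then reverted on a Luminous cluster (the exact daemons and values below are just examples; depending on the setup, injectargs may not be sufficient and a daemon restart or a ceph.conf entry under [global] may be needed, and since the OSDs also consult this value for their hard limit, it may need to reach them as well):

    # temporarily raise the limit on the mons (and, if needed, the OSDs)
    ceph tell mon.* injectargs '--mon_max_pg_per_osd 1000'
    ceph tell osd.* injectargs '--mon_max_pg_per_osd 1000'

    # alternatively, add to /etc/ceph/ceph.conf under [global] and restart:
    #   mon_max_pg_per_osd = 1000

    # once the cluster is back to HEALTH_OK, revert to the previous value
    # (500 in this case)
    ceph tell mon.* injectargs '--mon_max_pg_per_osd 500'
    ceph tell osd.* injectargs '--mon_max_pg_per_osd 500'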

Again, many thanks.

Jake

On 29/01/18 13:07, Nick Fisk wrote:
Hi Jake,

I suspect you have hit an issue that a few others and I have hit in
Luminous. By increasing the number of PGs before all the data had
rebalanced, you have probably exceeded the hard PG-per-OSD limit.

See this thread
https://www.spinics.net/lists/ceph-users/msg41231.html

Nick
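
For context, a couple of commands that can help confirm this diagnosis
(illustrative only; column names and output may differ slightly between
releases):

    # per-OSD PG counts are shown in the PGS column
    ceph osd df

    # list any PGs stuck in the "activating" state
    ceph pg dump pgs_brief | grep activating

    # overall health detail, including any PG-per-OSD warnings
    ceph health detail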





