Re: Upgrade from Giant 0.87-1 to Hammer 0.94-1

Hello,

On Thu, 16 Apr 2015 00:41:29 +0200 Steffen W Sørensen wrote:

> Hi,
> 
> Successfully upgraded a small development 4-node Giant 0.87-1 cluster
> to Hammer 0.94-1, each node with 6x OSD - 146GB, 19 pools, mainly 2 in
> use. The only minor thing now is that ceph -s complains about too many
> PGs; previously Giant had complained of too few, so various pools were
> bumped up until the health status was okay, as before upgrading.
> Admittedly, after bumping PGs up in Giant we had changed pool sizes
> from 3 to 2 & min 1 out of concern for performance when
> backfilling/recovering PGs.
>

That latter change would have _increased_ the recommended number of PGs,
not decreased it.

With your cluster, 2048 PGs total (all pools combined!) would be the
sweet spot, see:

http://ceph.com/pgcalc/
 
It seems to me that you increased PG counts assuming that the formula is
per pool.
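
For reference, the rule of thumb behind that calculator is roughly:

    total PGs = (number of OSDs * 100) / replication size,
    rounded up to the next power of two

so for 24 OSDs and size 2 that is (24 * 100) / 2 = 1200, which rounds up
to 2048 - and that is for all pools combined, not per pool.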

> 
> # ceph -s
>     cluster 16fe2dcf-2629-422f-a649-871deba78bcd
>      health HEALTH_WARN
>             too many PGs per OSD (1237 > max 300)
>      monmap e29: 3 mons at {0=10.0.3.4:6789/0,1=10.0.3.2:6789/0,2=10.0.3.1:6789/0}
>             election epoch 1370, quorum 0,1,2 2,1,0
>      mdsmap e142: 1/1/1 up {0=2=up:active}, 1 up:standby
>      osdmap e3483: 24 osds: 24 up, 24 in
>       pgmap v3719606: 14848 pgs, 19 pools, 530 GB data, 133 kobjects
>             1055 GB used, 2103 GB / 3159 GB avail
>                14848 active+clean
> 

This is an insanely high PG count for this cluster and is certain to
impact performance and resource requirements (all these PGs need to peer
after all).
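
The 1237 in that warning is essentially your total PG count multiplied
by the replica count and spread over the OSDs, roughly
14848 PGs * 2 replicas / 24 OSDs ≈ 1237. To see how those PGs are
distributed across your pools, something like this should show size,
pg_num and pgp_num per pool:

# ceph osd dump | grep ^pool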

> Can we just reduce PGs again, and should we decrement in minor steps,
> one pool at a time…
> 
No, as per the documentation you can only increase pg_num and pgp_num;
they cannot be decreased.

So your options are to totally flatten this cluster, or, if pools with
important data exist, to copy them to new, correctly sized pools and
delete all the oversized ones after that.
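
For a pool that only holds plain RADOS objects, the copy could look
roughly like this - the pool names and PG count here are just
placeholders, and rados cppool does not preserve snapshots, so test on
something unimportant first:

# ceph osd pool create newpool 128 128
# rados cppool oldpool newpool
# ceph osd pool delete oldpool oldpool --yes-i-really-really-mean-it
# ceph osd pool rename newpool oldpool

The CephFS data/metadata pools and anything referenced by name elsewhere
will need extra care, of course.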

Christian



-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/