Upgrade from Giant 0.87-1 to Hammer 0.94-1

Hi,

We successfully upgraded a small 4-node development Giant 0.87-1 cluster to Hammer 0.94-1; each node has 6 OSDs of 146 GB, and the cluster holds 19 pools, of which mainly 2 are in active use.
The only minor thing now is that ceph -s complains about too many PGs, whereas Giant had previously complained about too few, so various pools had been bumped up until the health status was OK again before the upgrade. Admittedly, after bumping the PGs up in Giant we also changed the pool sizes from 3 to 2 with min_size 1, fearing a performance hit when backfilling/recovering PGs.
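
For reference, a rough sketch of the kind of commands involved in those adjustments (this assumes the standard pool-tuning commands; <pool> and <N> are placeholders, not our actual values):

# ceph osd pool set <pool> pg_num <N>     (bump placement groups, as done under Giant)
# ceph osd pool set <pool> pgp_num <N>    (keep pgp_num in step with pg_num)
# ceph osd pool set <pool> size 2         (replica count lowered from 3 to 2)
# ceph osd pool set <pool> min_size 1     (minimum replicas required for I/O)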


# ceph -s
    cluster 16fe2dcf-2629-422f-a649-871deba78bcd
     health HEALTH_WARN
            too many PGs per OSD (1237 > max 300)
     monmap e29: 3 mons at {0=10.0.3.4:6789/0,1=10.0.3.2:6789/0,2=10.0.3.1:6789/0}
            election epoch 1370, quorum 0,1,2 2,1,0
     mdsmap e142: 1/1/1 up {0=2=up:active}, 1 up:standby
     osdmap e3483: 24 osds: 24 up, 24 in
      pgmap v3719606: 14848 pgs, 19 pools, 530 GB data, 133 kobjects
            1055 GB used, 2103 GB / 3159 GB avail
               14848 active+clean
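
If it helps, the 1237 figure seems to match the total PG replica count divided by the number of OSDs, assuming all pools are now at size 2 as described:

# echo $(( 14848 * 2 / 24 ))
1237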

Can we simply reduce the PG counts again, and if so, should we decrement in small steps, one pool at a time…
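
Before touching anything, a couple of read-only commands can show which pools contribute most of those PGs (again, <pool> is just a placeholder):

# ceph osd dump | grep pg_num       (size, pg_num and pgp_num per pool)
# ceph osd pool get <pool> pg_num   (single pool)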

Any thoughts, TIA!

/Steffen


> 1. restart the monitor daemons on each node
> 2. then, restart the osd daemons on each node
> 3. then, restart the mds daemons on each node
> 4. then, restart the radosgw daemon on each node
> 
> Regards.
> 
> -- 
> François Lafont
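
For anyone following the quoted restart order above, on a Hammer-era node it maps to something like the following (a sketch only; exact service and init-script names vary by distro and init system, and the radosgw init script is named differently on Debian and RHEL):

# service ceph restart mon      (step 1, on each monitor node)
# service ceph restart osd      (step 2, on each OSD node)
# service ceph restart mds      (step 3, on each MDS node)
# service radosgw restart       (step 4, where radosgw is deployed)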

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




