Re: Upgrade from Giant 0.87-1 to Hammer 0.94-1

Also, our Calamari web UI won't authenticate anymore. I can't see any issues in any log under /var/log/calamari; any hints on what to look for would be appreciated, TIA!

# dpkg -l | egrep -i calamari\|ceph
ii  calamari-clients                   1.2.3.1-2-gc1f14b2            all          Inktank Calamari user interface
ii  calamari-server                    1.3-rc-16-g321cd58            amd64        Inktank package containing the Calamari management srever
ii  ceph                               0.94.1-1~bpo70+1              amd64        distributed storage and file system
ii  ceph-common                        0.94.1-1~bpo70+1              amd64        common utilities to mount and interact with a ceph storage cluster
ii  ceph-deploy                        1.5.23~bpo70+1                all          Ceph-deploy is an easy to use configuration tool
ii  ceph-fs-common                     0.94.1-1~bpo70+1              amd64        common utilities to mount and interact with a ceph file system
ii  ceph-fuse                          0.94.1-1~bpo70+1              amd64        FUSE-based client for the Ceph distributed file system
ii  ceph-mds                           0.94.1-1~bpo70+1              amd64        metadata server for the ceph distributed file system
ii  curl                               7.29.0-1~bpo70+1.ceph         amd64        command line tool for transferring data with URL syntax
ii  libcephfs1                         0.94.1-1~bpo70+1              amd64        Ceph distributed file system client library
ii  libcurl3:amd64                     7.29.0-1~bpo70+1.ceph         amd64        easy-to-use client-side URL transfer library (OpenSSL flavour)
ii  libcurl3-gnutls:amd64              7.29.0-1~bpo70+1.ceph         amd64        easy-to-use client-side URL transfer library (GnuTLS flavour)
ii  libleveldb1:amd64                  1.12.0-1~bpo70+1.ceph         amd64        fast key-value storage library
ii  python-ceph                        0.94.1-1~bpo70+1              amd64        Meta-package for python libraries for the Ceph libraries
ii  python-cephfs                      0.94.1-1~bpo70+1              amd64        Python libraries for the Ceph libcephfs library
ii  python-rados                       0.94.1-1~bpo70+1              amd64        Python libraries for the Ceph librados library
ii  python-rbd                         0.94.1-1~bpo70+1              amd64        Python libraries for the Ceph librbd library
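
One thing not yet ruled out here (just a guess on my part, not a confirmed fix) is re-running the Calamari server initialization, which applies any pending database migrations and lets the admin login for the web UI be reset:

# calamari-ctl initialize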


> On 16/04/2015, at 00.41, Steffen W Sørensen <stefws@xxxxxx> wrote:
> 
> Hi,
> 
> Successfully upgraded a small development 4-node Giant 0.87-1 cluster to Hammer 0.94-1, each node with 6x OSDs (146 GB), 19 pools, mainly 2 in active use.
> The only minor thing now is ceph -s complaining about too many PGs; previously Giant had complained of too few, so various pools were bumped up until the health status was okay, as it was before upgrading. Admittedly, after bumping PGs up in Giant we had changed pool sizes from 3 to 2 and min_size to 1, for fear of performance impact when backfilling/recovering PGs.
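
For reference, the bumping and resizing mentioned above map to the standard pool commands; the pool name and values below are only placeholders:

# ceph osd pool set <pool> pg_num 1024
# ceph osd pool set <pool> pgp_num 1024
# ceph osd pool set <pool> size 2
# ceph osd pool set <pool> min_size 1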
> 
> 
> # ceph -s
>    cluster 16fe2dcf-2629-422f-a649-871deba78bcd
>     health HEALTH_WARN
>            too many PGs per OSD (1237 > max 300)
>     monmap e29: 3 mons at {0=10.0.3.4:6789/0,1=10.0.3.2:6789/0,2=10.0.3.1:6789/0}
>            election epoch 1370, quorum 0,1,2 2,1,0
>     mdsmap e142: 1/1/1 up {0=2=up:active}, 1 up:standby
>     osdmap e3483: 24 osds: 24 up, 24 in
>      pgmap v3719606: 14848 pgs, 19 pools, 530 GB data, 133 kobjects
>            1055 GB used, 2103 GB / 3159 GB avail
>               14848 active+clean
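
As a sanity check on the 1237 figure: the warning counts PG replicas per OSD, i.e. the sum over pools of pg_num * size divided by the number of OSDs, and 14848 PGs at size 2 across 24 OSDs gives 14848 * 2 / 24 ≈ 1237. A rough way to recompute it from the osdmap (assuming replicated pools and that all 24 OSDs carry PGs):

# ceph osd dump | awk '/^pool/ {for (i=1;i<=NF;i++) {if ($i=="size") s=$(i+1); if ($i=="pg_num") p=$(i+1)}; sum+=s*p} END {print sum/24, "PG replicas per OSD"}'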
> 
> Can we just reduce PGs again, and should we decrement in small steps, one pool at a time…
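
As far as I know pg_num cannot be decreased on an existing pool in Hammer, so short of recreating and migrating pools the practical alternative is raising the warning threshold (the 300 above is the default mon_pg_warn_max_per_osd). A hedged sketch, the value is only an example:

# ceph tell mon.0 injectargs '--mon_pg_warn_max_per_osd 1500'   # repeat for mon.1 and mon.2

and persist it in ceph.conf on the monitor nodes:

[mon]
    mon pg warn max per osd = 1500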
> 
> Any thoughts, TIA!
> 
> /Steffen
> 
> 
>> 1. restart the monitor daemons on each node
>> 2. then, restart the osd daemons on each node
>> 3. then, restart the mds daemons on each node
>> 4. then, restart the radosgw daemon on each node
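
On Debian nodes with the stock sysvinit scripts from the packages, that sequence maps to roughly the following (assuming radosgw is deployed at all; daemons can also be restarted individually by id, e.g. mon.0):

# service ceph restart mon
# service ceph restart osd
# service ceph restart mds
# service radosgw restart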
>> 
>> Regards.
>> 
>> -- 
>> François Lafont
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




