Re: Calamari server not working after upgrade 0.87-1 -> 0.94-1


 



Hi, can you check /var/log/salt/minion on your ceph nodes?

I have had a similar problem; I needed to remove the minion's cached copy of the master key and restart the minion:

rm /etc/salt/pki/minion/minion_master.pub
/etc/init.d/salt-minion restart


(I don't know whether "calamari-ctl clear" changes the salt master key.)
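
If that alone does not help, maybe also delete and re-accept the node's key on the salt master (the Calamari host); the node name here is only an example taken from your output below:

salt-key -d node1.<domain>
# restart salt-minion on that node so it resubmits its key, then:
salt-key -a node1.<domain>
salt '*' test.ping    # confirm master <-> minion communication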


----- Original Message -----
From: "Steffen W Sørensen" <stefws@xxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Monday, 27 April 2015 14:50:56
Subject: Calamari server not working after upgrade 0.87-1 -> 0.94-1

All, 

After successfully upgrading from Giant to Hammer, our Calamari server at first seemed fine, showing the new "too many PGs" warning. Then, during/after removing and consolidating various pools, it stopped getting updated. Since I haven't been able to find any root cause, I decided to flush the PostgreSQL DB (calamari-ctl clear --yes-I-am-sure) and start all over (calamari-ctl initialize), restarting salt-minion + diamond on all nodes (rough command sequence sketched after the dashboard message below). Only now I just see this on my dashboard: 



This appears to be the first time you have started Calamari and there are no clusters currently configured. 


4 Ceph servers are connected to Calamari, but no Ceph cluster has been created yet. Please use ceph-deploy to create a cluster; please see the Inktank Ceph Enterprise documentation for more details. 
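
For reference, the reset was roughly this sequence (restart commands written as init scripts, as on Debian; adjust to your init system):

calamari-ctl clear --yes-I-am-sure
calamari-ctl initialize
# then on every ceph node:
/etc/init.d/salt-minion restart
/etc/init.d/diamond restart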

Salt keys are still accepted: 
root@node1:/var/log/calamari# salt-key -L 
Accepted Keys: 
node1.<domain> 
node2.<domain> 
node3.<domain> 
node4.<domain> 
Unaccepted Keys: 
Rejected Keys: 
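
In case it helps diagnose: assuming Calamari's ceph salt module is deployed on the nodes (an assumption on my part), the command below should show whether the minions actually deliver cluster heartbeat data to the master, and /var/log/calamari/cthulhu.log on the Calamari host might show why the backend ignores it:

salt '*' ceph.get_heartbeats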

Our cluster is of course running fine: 

root@node1:/var/log/calamari# ceph -s 
cluster 16fe2dcf-2629-422f-a649-871deba78bcd 
health HEALTH_OK 
monmap e29: 3 mons at {0=10.0.3.4:6789/0,1=10.0.3.2:6789/0,2=10.0.3.1:6789/0} 
election epoch 1382, quorum 0,1,2 2,1,0 
mdsmap e152: 1/1/1 up {0=2=up:active}, 1 up:standby 
osdmap e3579: 24 osds: 24 up, 24 in 
pgmap v4646340: 3072 pgs, 3 pools, 913 GB data, 229 kobjects 
1824 GB used, 1334 GB / 3159 GB avail 
3072 active+clean 
client io 32524 B/s wr, 11 op/s 


Any hints appreciated… 

TIA! 

/Steffen 

_______________________________________________ 
ceph-users mailing list 
ceph-users@xxxxxxxxxxxxxx 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 




