Mixing CEPH versions on new ceph nodes...


 



Hi,

>>But in reality (yum update or by using ceph-deploy install nodename....) - the package manager does restart ALL ceph services on that node by its own...

Debian packages don't restart Ceph services on package update; maybe it's a bug in the RPM packaging?
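One way to check whether the RPM packaging is at fault (this is an assumption, not confirmed) is to inspect the package scriptlets before upgrading. The sketch below greps a sample scriptlet string so it runs anywhere; on a real RPM-based node you would pipe the output of `rpm -q --scripts ceph` instead. The scriptlet text here is a hypothetical example:

```shell
# Sample text standing in for real `rpm -q --scripts ceph` output
# (illustrative only; the restart line is a hypothetical example).
sample_scriptlet='postuninstall scriptlet (using /bin/sh):
/sbin/service ceph restart >/dev/null 2>&1 || :'

# On a real node:  rpm -q --scripts ceph | grep -E 'service|systemctl|restart'
hits=$(printf '%s\n' "$sample_scriptlet" | grep -cE 'service|systemctl|restart')
echo "restart-related scriptlet lines: $hits"
```

If that grep finds a restart in %post or %postun, the package manager will bounce the daemons on every upgrade, regardless of the order you intended.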


----- Original Message ----- 

From: "Andrija Panic" <andrija.panic at gmail.com> 
To: "Wido den Hollander" <wido at 42on.com> 
Cc: ceph-users at lists.ceph.com 
Sent: Sunday, 13 July 2014 23:42:55 
Subject: Re: [ceph-users] Mixing CEPH versions on new ceph nodes... 


Hi Wido, 


you said previously: 
Upgrade the packages, but don't restart the daemons yet, then: 
1. Restart the mon leader 
2. Restart the two other mons 
3. Restart all the OSDs one by one 



But in reality (yum update, or using ceph-deploy install nodename...) the package manager restarts ALL Ceph services on that node on its own... 
So when I upgraded, the MON leader and the 2 OSDs on the first upgraded host were restarted, followed by the same on the other 2 servers (1 MON peon and 2 OSDs per host). 


Is this perhaps a package (RPM) bug, restarting the daemons automatically? Since it makes sense to have all MONs updated first, then the OSDs (and perhaps after that the MDS, if using it...). 



Upgraded to 0.80.3 release btw. 



Thanks for your help again. 
Andrija 





On 3 July 2014 15:21, Andrija Panic < andrija.panic at gmail.com > wrote: 



Thanks again a lot. 





On 3 July 2014 15:20, Wido den Hollander < wido at 42on.com > wrote: 

<blockquote>

On 07/03/2014 03:07 PM, Andrija Panic wrote: 

<blockquote>
Wido, 
one final question: 
since I compiled libvirt 1.2.3 using ceph-devel 0.72 - do I need to 
recompile libvirt again now with ceph-devel 0.80? 

Perhaps not a smart question, but I need to make sure I don't screw something up... 



No, no need to. The librados API didn't change in case you are using RBD storage pool support. 

Otherwise it just talks to Qemu and that talks to librbd/librados. 

Wido 


<blockquote>

Thanks for your time, 
Andrija 


On 3 July 2014 14:27, Andrija Panic <andrija.panic at gmail.com> wrote: 

Thanks a lot Wido, will do... 

Andrija 


On 3 July 2014 13:12, Wido den Hollander <wido at 42on.com> wrote: 

On 07/03/2014 10:59 AM, Andrija Panic wrote: 

Hi Wido, thanks for the answers - I have mons and OSDs on each host... 
server1: mon + 2 OSDs, same for server2 and server3. 

Any proposed upgrade path, or just start with 1 server and move along to the others? 


Upgrade the packages, but don't restart the daemons yet, then: 

1. Restart the mon leader 
2. Restart the two other mons 
3. Restart all the OSDs one by one 

I suggest that you wait for the cluster to become fully healthy 
again before restarting the next OSD. 
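The rolling restart described above can be sketched as a script. This is a dry run that only echoes the planned commands (the mon names and OSD ids are hypothetical; on sysvinit-era Ceph the restart command would be along the lines of `service ceph restart <daemon>`):

```shell
out=""
run() { out="$out$1; "; echo "$1"; }   # dry run: record and echo, don't execute

run "service ceph restart mon.a"       # 1. the mon leader first
run "service ceph restart mon.b"       # 2. then the other two mons
run "service ceph restart mon.c"
for id in 0 1; do                      # 3. OSDs one by one
  run "service ceph restart osd.$id"
  run "ceph health"                    # wait for HEALTH_OK before the next OSD
done
```

On a real node you would replace the `echo` inside `run` with actual execution, and loop on `ceph health` until it reports HEALTH_OK between OSD restarts.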

Wido 

Thanks again. 
Andrija 


On 2 July 2014 16:34, Wido den Hollander <wido at 42on.com> wrote: 

On 07/02/2014 04:08 PM, Andrija Panic wrote: 

Hi, 

I have existing CEPH cluster of 3 nodes, versions 
0.72.2 

I'm in a process of installing CEPH on 4th node, 
but now CEPH 
version is 
0.80.1 

Will this make problems running mixed CEPH versions ? 


No, but the recommendation is not to have this running for a very long period. Try to upgrade all nodes to the same version within a reasonable amount of time. 
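During the window where versions are mixed, it helps to see which version each daemon is actually running. Per-daemon queries such as `ceph tell osd.0 version` exist for this; the snippet below just parses sample output so it is self-contained (the OSD ids and version strings mirror the ones in this thread):

```shell
# Sample output standing in for e.g.:
#   for id in 0 1 2; do ceph tell osd.$id version; done
sample='osd.0: ceph version 0.72.2
osd.1: ceph version 0.80.1'

# Distinct versions currently present in the cluster
versions=$(printf '%s\n' "$sample" | awk '{print $NF}' | sort -u)
echo "$versions"
```

More than one line of output means the upgrade window is still open and some daemons remain to be restarted.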


I intend to upgrade CEPH on the existing 3 nodes anyway. 
Recommended steps? 


Always upgrade the monitors first! Then the OSDs, one by one. 

Thanks 

-- 

Andrija Panić 





-- 
Wido den Hollander 
42on B.V. 
Ceph trainer and consultant 

Phone: +31 (0)20 700 9902 
Skype: contact42on 




-- 

Andrija Panić 



-- 
Wido den Hollander 
Ceph consultant and trainer 
42on B.V. 

Phone: +31 (0)20 700 9902 
Skype: contact42on 




-- 

Andrija Panić 
-------------------------------------- 
http://admintweets.com 
-------------------------------------- 




-- 

Andrija Panić 
-------------------------------------- 
http://admintweets.com 
-------------------------------------- 

</blockquote>




-- 
Wido den Hollander 
Ceph consultant and trainer 
42on B.V. 

Phone: +31 (0)20 700 9902 
Skype: contact42on 

</blockquote>




-- 



Andrija Panić 
-------------------------------------- 
http://admintweets.com 
-------------------------------------- 
</blockquote>




-- 



Andrija Panić 
-------------------------------------- 
http://admintweets.com 
-------------------------------------- 
_______________________________________________ 
ceph-users mailing list 
ceph-users at lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 



