Ceph upgrade kraken -> luminous without deploy

I have upgraded a test cluster by just updating the rpms and issuing a
ceph osd require-osd-release, because it was mentioned in the status. Is
there anything more you need to do?
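
For reference, the hint in question shows up in the cluster status roughly
like this (paraphrased from memory, check your own output):

ceph -s
  ...
  health: HEALTH_WARN
          all OSDs are running luminous or later but require_osd_release < luminous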


- update the packages on all nodes
sed -i 's/Kraken/Luminous/g' /etc/yum.repos.d/ceph.repo
# note: match the case your ceph.repo actually uses; the official
# download.ceph.com baseurls are lowercase (rpm-kraken -> rpm-luminous)
yum update
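
Before restarting anything you can check that the new packages actually
landed, e.g.:

ceph --version    # should now report 12.2.x (luminous) on every node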

- then, on each node, restart the monitor first
systemctl restart ceph-mon@X    # X = the mon id, usually the short hostname
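
Wait until the restarted mon has rejoined the quorum before moving on to
the next node, e.g.:

ceph mon stat    # all monitors should be listed in quorum again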

- then restart the osds on each node
ceph osd tree    # shows which osd ids live on which host
systemctl restart ceph-osd@X    # X = the osd id
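
As a shortcut you can also restart all osds on a node in one go via the
systemd target (assuming the standard ceph systemd units):

systemctl restart ceph-osd.target

Either way, wait until ceph -s shows the osds up and the pgs active+clean
again before doing the next node.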

- then restart the mds on each node
systemctl restart ceph-mds@X    # X = the mds id, usually the short hostname
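
Again you can verify the daemon came back before continuing, e.g.:

ceph mds stat    # the mds should report up:active again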

ceph osd require-osd-release luminous
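
Afterwards you can verify the flag is set, e.g.:

ceph osd dump | grep require_osd_release    # should print: require_osd_release luminous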



-----Original Message-----
From: Hauke Homburg [mailto:hhomburg@xxxxxxxxxxxxxx] 
Sent: Sunday, 2 July 2017 13:24
To: ceph-users@xxxxxxxxxxxxxx
Subject:  Ceph Cluster with Deep Scrub Error

Hello,

I have a Ceph cluster with 5 Ceph servers, running under CentOS 7.2 and
ceph 10.2.5. All OSDs run on a RAID 6.
In this cluster I have a deep scrub error:
/var/log/ceph/ceph-osd.6.log-20170629.gz:389.356391 7f1ac4c57700 -1 log_channel(cluster) log [ERR] : 1.129 deep-scrub 1 errors

This line is the only line I can find with the error.

I tried to repair it with ceph osd deep-scrub <id> and ceph pg repair.
Neither fixed the error.

What can I do to repair the error?

Regards

Hauke

--
www.w3-creative.de

www.westchat.de


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


