Re: Hammer to Jewel upgrade questions

If you are unfamiliar with the systemd syntax of `systemctl restart ceph.target` or `systemctl restart ceph-osd@12`, then I would recommend upgrading the cluster first, getting it working, and then upgrading your OS when you have some time to play with it and figure out how to use systemd instead of upstart.  You will also need to replace the init-system marker file in each daemon's data directory (the file named 'upstart' in each OSD, mon, MDS, etc. directory, which tells Ceph which init system manages that daemon) with one named 'systemd'.  If you are comfortable using systemd, then go ahead and upgrade the OS first and get the cluster working with systemd before upgrading to Jewel.
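For example, to flip a single OSD from upstart to systemd management (a sketch using OSD id 12 from above; mon and mds data directories get the same marker-file treatment):

rm /var/lib/ceph/osd/ceph-12/upstart       # drop the upstart marker file
touch /var/lib/ceph/osd/ceph-12/systemd    # mark the daemon as systemd-managed
systemctl enable ceph-osd@12               # register the unit to start at boot
systemctl restart ceph-osd@12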

On Wed, May 17, 2017 at 2:25 AM Frédéric Nass <frederic.nass@xxxxxxxxxxxxxxxx> wrote:


----- On 16 May 17, at 20:43, Shain Miley <smiley@xxxxxxx> wrote:
Hello,

I am going to be upgrading our production Ceph cluster from
Hammer/Ubuntu 14.04 to Jewel/Ubuntu 16.04 and I wanted to ask a question
and sanity check my upgrade plan.

Here are the steps I am planning to take during the upgrade:
Hi Shain,

0) Upgrade operating system packages first and reboot onto the new kernel if needed.
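On Ubuntu that is typically something like:

apt-get update && apt-get dist-upgrade    # bring 14.04 packages current
reboot                                    # only if a new kernel came in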


1) Upgrade to the latest Hammer release on the current cluster
2) Remove or rename the existing ‘ceph’ user and ‘ceph’ group on each node
3) Upgrade the Ceph packages to the latest Jewel (mon, then osd, then rbd clients)
You might want to upgrade the RBD clients first. This may not be a mandatory step, but it is a cautious one.
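For steps 2 and 3, a rough sketch on each node (the 'ceph-old' rename target is just an illustration; the repo setup depends on your environment):

usermod -l ceph-old ceph      # rename the old 'ceph' user out of the way
groupmod -n ceph-old ceph     # same for the group, so Jewel can create uid/gid 64045
apt-get update                # with the Jewel repo configured
apt-get install -y ceph       # pull in the 10.2.x packages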


4) Stop the Ceph daemons
5) Change ownership on the Ceph directories and OSD journals:

# chown each OSD data directory in parallel (64045 is the uid/gid the
# Jewel packages assign to the 'ceph' user on Ubuntu; requires GNU parallel):
find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -type d | parallel chown -R 64045:64045
chown 64045:64045 /var/lib/ceph
chown 64045:64045 /var/lib/ceph/*
chown 64045:64045 /var/lib/ceph/bootstrap-*/*

# chown each OSD journal, following the symlink to the journal device:
for ID in $(ls /var/lib/ceph/osd/ | cut -d '-' -f 2); do
     JOURNAL=$(readlink -f /var/lib/ceph/osd/ceph-${ID}/journal)
     chown ceph ${JOURNAL}
done
You can avoid this step by adding setuser_match_path = /var/lib/ceph/$type/$cluster-$id to the [osd] section of ceph.conf. This makes the Ceph daemons run as root whenever a daemon's data directory is still owned by root.
Newly deployed daemons will be created with data owned by user ceph and will run with reduced privileges, but upgraded daemons will continue to run as root.
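In ceph.conf, that is:

[osd]
setuser_match_path = /var/lib/ceph/$type/$cluster-$id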

Or you can still change ownership of the files to ceph, but that can take a long time depending on the number of objects and PGs in your cluster, for close to zero benefit, especially since you will recreate all of this data anyway when BlueStore comes out.

6) Restart the Ceph daemons
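Depending on which init system the node is on, that is something like:

systemctl restart ceph.target    # systemd (Ubuntu 16.04)
restart ceph-all                 # upstart (Ubuntu 14.04)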

The two questions I have are:
1) Am I missing anything from the steps above, based on prior experience performing upgrades of this kind?

2) Should I upgrade to Ubuntu 16.04 first and then upgrade Ceph, or vice versa?
This documentation (http://docs.ceph.com/docs/master/start/os-recommendations/) suggests sticking with Ubuntu 14.04, but the RHCS knowledge base shows that RHCS 2.x (Jewel 10.2.x) is only supported on Ubuntu 16.04.
When we upgraded from Hammer to Jewel, we upgraded the OS first, from RHEL 7 to 7.1, and then RHCS. I'm not sure whether you should temporarily run Hammer on Ubuntu 16.04 or Jewel on Ubuntu 14.04.
I would upgrade the lowest layer (the OS) first, on a single OSD node, and see how it goes.
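A standard precaution when rebooting nodes one at a time (not spelled out above) is to set noout first:

ceph osd set noout      # stop CRUSH from re-replicating while OSDs are down
# ... upgrade and reboot the node, wait for its OSDs to rejoin ...
ceph -s                 # confirm all OSDs are back up before doing the next node
ceph osd unset noout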

Regards,

Frederic.

Thanks in advance,
Shain

--
NPR | Shain Miley | Manager of Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
