Re: Hammer to Jewel upgrade questions

1) This is a very common way to keep the Hammer-to-Jewel upgrade from taking days or weeks. You will definitely want to finish the ownership change eventually so that you no longer need to run with this setting in your config file, but it doesn't need to be done during the upgrade. There are some things you cannot do while the setting is in your config file; in particular, finish it before adding any additional storage. There are weird bugs when adding/starting new daemons: they can crash because the daemon can't find the owner/group of the folder while the folder is still being created. If you need to add storage before you finish, remove the setting from your config file temporarily while you add it, and put it back once the new storage is in the cluster.
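When you are ready to finish, a rough per-OSD sketch might look like the following (OSD 12 is just an illustration; adjust IDs, paths, and the chown target for your setup, and do one daemon at a time so the cluster stays up):

stop ceph-osd id=12                 # or: systemctl stop ceph-osd@12 once you are on systemd
chown -R ceph:ceph /var/lib/ceph/osd/ceph-12
chown ceph:ceph $(readlink -f /var/lib/ceph/osd/ceph-12/journal)
start ceph-osd id=12                # or: systemctl start ceph-osd@12
# once every daemon's data is owned by ceph, drop the setuser match path line from ceph.conf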

2) Inside each daemon's folder (/var/lib/ceph/mon/mon1/, /var/lib/ceph/osd/ceph-2/, etc.) there is an empty file whose name is the important part: it is named for the type of service management your system is using. In your cluster you will find an empty file named 'upstart' in all of your ceph daemon folders. Upstart is no longer used in Ubuntu 16.04, so rename that file, or remove it and touch a new one, so that there is no longer an 'upstart' file and there is a 'systemd' file instead.
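For example, something along these lines for a single OSD (OSD 2 here is just an illustration; repeat for every mon/osd/mds directory on the node):

cd /var/lib/ceph/osd/ceph-2
rm upstart
touch systemd
chown ceph:ceph systemd        # only if you have already switched ownership to the ceph user
# you may also need: systemctl enable ceph-osd@2   (so the unit starts at boot)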

On Wed, May 17, 2017 at 11:44 AM Shain Miley <smiley@xxxxxxx> wrote:

Thank you both for the valuable information.

Would you mind clearing up these 2 questions based on your suggestions below?

1)Frederic...I see now the benefits of setting 'setuser_match_path' as suggested below; it sounds like there is very little upside to making the permission changes at this time.  We currently have 200 OSDs ranging from 4 to 6 TB each...it sounds like the permission changes might take several hours and not really yield any real-world benefit.

Are most people skipping this step...or biting the bullet now and just dealing with the downtime...are there any known downsides to setting 'setuser_match_path' that I am not currently thinking about?

2)David...can you please provide me with a little bit more detail on what you mean by 'You will also need to change the file that is touched on each of your daemons from upstart to systemd in each OSD, mon, mds, etc'...I am a little unclear on exactly what you mean by that.

Thanks again for all your help so far.

Shain




On 05/17/2017 10:30 AM, David Turner wrote:
If you are unfamiliar with the systemd syntax of `systemctl restart ceph.target` or `systemctl restart ceph-osd@12`, then I would recommend upgrading the cluster first, getting it working, and then upgrading your OS when you have some time to play with it and figure out how to use systemd instead of upstart.  You will also need to change the file that is touched on each of your daemons from upstart to systemd in each OSD, mon, mds, etc.  If you are comfortable using systemd, then go ahead and upgrade the OS first and get the cluster working with systemd before upgrading to Jewel.
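For reference, a rough mapping between the two styles (the service names here are examples; the osd and mon ids depend on your hosts):

# upstart (Ubuntu 14.04 / Hammer)
sudo restart ceph-osd id=12
sudo restart ceph-all

# systemd (Ubuntu 16.04 / Jewel)
sudo systemctl restart ceph-osd@12
sudo systemctl restart ceph-mon@$(hostname -s)
sudo systemctl restart ceph.target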

On Wed, May 17, 2017 at 2:25 AM Frédéric Nass <frederic.nass@xxxxxxxxxxxxxxxx> wrote:


----- On 16 May 17, at 20:43, Shain Miley <smiley@xxxxxxx> wrote:
Hello,

I am going to be upgrading our production Ceph cluster from
Hammer/Ubuntu 14.04 to Jewel/Ubuntu 16.04 and I wanted to ask a question
and sanity check my upgrade plan.

Here are the steps I am planning to take during the upgrade:
Hi Shain,

0) Upgrade operating system packages first and reboot onto the new kernel if needed.


1)Upgrade to the latest Hammer on the current cluster
2)Remove or rename the existing ‘ceph’ user and ‘ceph’ group on each node
3)Upgrade the ceph packages to the latest Jewel (mon, then osd, then rbd clients)
You might want to upgrade the RBD clients first. This may not be a mandatory step, but it is the cautious approach.


4)stop ceph daemons
5)change permissions on ceph directories and osd journals:

find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -type d | parallel chown -R 64045:64045
chown 64045:64045 /var/lib/ceph
chown 64045:64045 /var/lib/ceph/*
chown 64045:64045 /var/lib/ceph/bootstrap-*/*

for ID in $(ls /var/lib/ceph/osd/ | cut -d '-' -f 2); do
     JOURNAL=$(readlink -f /var/lib/ceph/osd/ceph-${ID}/journal)
     chown ceph ${JOURNAL}
done
You can avoid this step by adding setuser_match_path = /var/lib/ceph/$type/$cluster-$id to the [osd] section. This will make the Ceph daemons run as root if the daemon’s data directory is still owned by root.
Newly deployed daemons will be created with data owned by user ceph and will run with reduced privileges, but upgraded daemons will continue to run as root.
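For example, a minimal ceph.conf sketch (the option name can be written with spaces or underscores; Ceph treats them the same, so 'setuser_match_path' and 'setuser match path' are the same option):

[osd]
setuser match path = /var/lib/ceph/$type/$cluster-$id

Add it on each node and restart the daemons for it to take effect; once you eventually chown everything over to ceph:ceph, remove the line again.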

Or you can still change the ownership of the files to ceph, but that can take a long time depending on the number of objects and PGs you have in your cluster, for essentially zero benefit, especially since you will recreate all of this data when BlueStore comes out.

6)restart ceph daemons

The two questions I have are:
1)Am I missing anything from the steps above...based on prior
experiences performing upgrades of this kind?

2)Should I upgrade to Ubuntu 16.04 first and then upgrade Ceph...or vice
versa?
This documentation (http://docs.ceph.com/docs/master/start/os-recommendations/) suggests sticking with Ubuntu 14.04, but the RHCS KB shows that RHCS 2.x (Jewel 10.2.x) is only supported on Ubuntu 16.04.
When upgrading from Hammer to Jewel, we upgraded the OS first, from RHEL 7 to 7.1, then RHCS. I'm not sure whether you should temporarily run Hammer on Ubuntu 16.04 or Jewel on Ubuntu 14.04.
I would upgrade the lowest layer (the OS) on a single OSD node first and see how it goes.

Regards,

Frederic.




Thanks in advance,
Shain

--
NPR | Shain Miley | Manager of Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649


-- 
NPR | Shain Miley | Manager of Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
