Thank you both for the valuable information.
Would you mind clearing up these 2 questions based on your
suggestions below?
1)Frederic...I see now the benefits of setting
'setuser_match_path' as suggested below; it sounds like there is
very little upside to making the permissions changes at this
time. We currently have 200 OSDs ranging from 4 to 6 TB
each...it sounds like the permission changes might take several
hours and not yield any real-world benefit.
Are most people skipping this step...or biting the bullet now and
just dealing with the downtime? Are there any known downsides to
setting 'setuser_match_path' that I am not currently thinking
about?
2)David...can you please provide a little more detail
on what you mean by 'You will also need to change the file that is
touched on each of your daemons from upstart to systemd in each
OSD, mon, mds, etc.'? I am a little unclear on exactly what that
involves.
Thanks again for all your help so far.
Shain
On 05/17/2017 10:30 AM, David Turner
wrote:
If you are unfamiliar with the systemd syntax
of `systemctl restart ceph.target` or `systemctl restart
ceph-osd@12`, then I would recommend upgrading the cluster
first, getting it working, and then upgrading your OS when you have
some time to play with it and figure out how to use systemd instead
of upstart. You will also need to change the file that is
touched on each of your daemons from upstart to systemd in each
OSD, mon, mds, etc. If you are comfortable using systemd, then
go ahead and upgrade the OS first and get the cluster working
with systemd before upgrading to Jewel.
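For example, something roughly like this (a sketch only; it assumes the
default /var/lib/ceph layout and that the init marker file is simply named
after the init system, so verify the exact file names on one of your nodes
first):

# swap the init-system marker in each OSD data directory
for dir in /var/lib/ceph/osd/ceph-*; do
    rm -f ${dir}/upstart      # old marker: daemon managed by upstart
    touch ${dir}/systemd      # new marker: daemon managed by systemd
done
# the same idea applies to the directories under /var/lib/ceph/mon and /var/lib/ceph/mds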
Hello,
I am going to be upgrading our production Ceph cluster from
Hammer/Ubuntu 14.04 to Jewel/Ubuntu 16.04 and I wanted to ask a
question and sanity check my upgrade plan.
Here are the steps I am planning to take during the upgrade:
Hi Shain,
0) Upgrade operating system packages first and reboot on the new
kernel if needed.
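On Ubuntu that step might look roughly like this (adjust to your own
patching process):

apt-get update && apt-get dist-upgrade   # bring OS packages current
reboot                                   # only needed if a newer kernel was installed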
1)Upgrade to the latest Hammer on the current cluster
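A minimal sketch of that step on Ubuntu, assuming your apt sources
already point at a Hammer repository such as
download.ceph.com/debian-hammer:

apt-get update
apt-get install ceph ceph-common   # pulls the newest Hammer build from the configured repo
ceph --version                     # confirm each node reports the same Hammer version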
2)Remove or rename the existing ‘ceph’ user and ‘ceph’
group on each node
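A hedged sketch of that step, assuming the pre-existing 'ceph' account is
not used by anything else on the node:

getent passwd ceph && usermod -l ceph-old ceph   # rename the user (or: userdel ceph)
getent group ceph && groupmod -n ceph-old ceph   # rename the group (or: groupdel ceph)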
3)Upgrade the ceph packages to latest Jewel (mon, then
osd, then rbd
clients)
You might want to upgrade the RBD clients first. This
may not be a mandatory step, but it is a cautious one.
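A sketch of the repository switch for step 3, assuming the sources live in
/etc/apt/sources.list.d/ceph.list and point at download.ceph.com (adjust to
however your repositories are managed):

sed -i 's/debian-hammer/debian-jewel/' /etc/apt/sources.list.d/ceph.list
apt-get update
apt-get install ceph ceph-common   # upgrade to the Jewel packages from the new repo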
4)Stop ceph daemons
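On 14.04 with the stock upstart jobs, that is roughly:

stop ceph-all        # stops every mon/osd/mds daemon on the node
# or per daemon type, e.g.:
stop ceph-osd-all
stop ceph-mon-all

(adjust if your daemons are still started via /etc/init.d/ceph).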
5)Change permissions on the ceph directories and OSD journals:

# 64045 is the uid/gid the Jewel .deb packages assign to the 'ceph' user
find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -type d | parallel chown -R 64045:64045
chown 64045:64045 /var/lib/ceph
chown 64045:64045 /var/lib/ceph/*
chown 64045:64045 /var/lib/ceph/bootstrap-*/*

# also chown each OSD journal, which may live outside /var/lib/ceph
for ID in $(ls /var/lib/ceph/osd/ | cut -d '-' -f 2); do
  JOURNAL=$(readlink -f /var/lib/ceph/osd/ceph-${ID}/journal)
  chown ceph ${JOURNAL}
done
You can avoid this step by adding
setuser_match_path = /var/lib/ceph/$type/$cluster-$id
to the [osd] section. This will make the Ceph daemons
run as root if the daemon’s data directory is still
owned by root.
Newly deployed daemons will be created with data
owned by user ceph and will run with reduced
privileges, but upgraded daemons will continue to run
as root.
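In ceph.conf that looks roughly like this (assuming the default
/etc/ceph/ceph.conf on each OSD node):

[osd]
setuser_match_path = /var/lib/ceph/$type/$cluster-$id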
Or you can still change the ownership of the files
to ceph, but it can take a long time depending on the number
of objects and PGs in your cluster, for essentially zero
benefit, especially since when BlueStore comes out you will
recreate all this data anyway.
6)Restart ceph daemons
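Depending on whether the node is still on upstart (14.04) or already on
systemd (16.04), that is roughly one of:

start ceph-all                    # upstart, Ubuntu 14.04
systemctl restart ceph.target     # systemd, Ubuntu 16.04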
The two questions I have are:
1)Am I missing anything from the steps above...based
on prior
experiences performing upgrades of this kind?
2)Should I upgrade to Ubuntu 16.04 first and then
upgrade Ceph...or vice
versa?
When upgrading from Hammer to Jewel, we upgraded the OS
first, from RHEL 7 to 7.1, then RHCS. I'm not sure whether
you should temporarily run Hammer on Ubuntu 16.04 or
Jewel on Ubuntu 14.04.
I would upgrade the lowest layer (the OS) of a
single OSD node first and see how it goes.
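A couple of read-only checks that help with the "see how it goes" part once
the first node is back up:

ceph -s          # overall health and recovery status
ceph osd tree    # confirm that node's OSDs are back up and in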
Regards,
Frederic.
--
NPR | Shain Miley | Manager of Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649