Hi,
We initially upgraded from Hammer to Jewel while keeping the ownership unchanged by adding "setuser match path = /var/lib/ceph/$type/$cluster-$id" to ceph.conf.
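For reference, the directive just goes into ceph.conf as a plain option, e.g. under [global] (a minimal sketch; a per-daemon section works as well):

[global]
setuser match path = /var/lib/ceph/$type/$cluster-$id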
Later, we used the following steps to change from running as root to running as ceph.
On the storage nodes, we first ran the following command. It doesn't change any ownership (everything is still root:root), but it walks the whole tree so the filesystem metadata is cached, which makes the real chown later much faster (based on http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-November/006013.html):
find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -print | xargs -P12 -n1 chown -R root:root
Set noout:
ceph osd set noout
On Storage node:
Edited "/etc/ceph/ceph.conf" and commented out the setuser line, so it reads: #setuser match path = /var/lib/ceph/$type/$cluster-$id
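If you want to script that edit, something like the following should do it (a rough sketch, assuming the option sits at the start of its line and appears only once):
sed -i 's/^setuser match path/#setuser match path/' /etc/ceph/ceph.conf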
stop ceph-osd-all
find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -print | xargs -P12 -n1 chown -R ceph:ceph
chown -R ceph:ceph /var/lib/ceph/
start ceph-osd-all
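Note: "stop/start ceph-osd-all" are the Upstart jobs; on a systemd based host the rough equivalent would be:
systemctl stop ceph-osd.target
systemctl start ceph-osd.target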
Check that all the Ceph OSD processes are running:
ps aux | grep ceph | egrep -v grep
Unset "noout":
ceph osd unset noout
Wait till ceph is healthy again and continue with the next storage node.
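For the health check itself, the usual status commands are enough, e.g.:
ceph health
ceph -s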
The OSDs were down for about 2 minutes, because we ran the find command beforehand and used xargs with 12 parallel processes, so recovery time was quick as well.
On Tue, Mar 14, 2017 at 3:27 AM, Richard Arends <cephmailinglist@xxxxxxxxx> wrote:
On 03/13/2017 02:02 PM, Christoph Adomeit wrote:
Christoph,
Thanks for the detailed upgrade report.
We have another scenario: We have already upgraded to Jewel 10.2.6, but
we are still running all our monitors and osd daemons as root using the
setuser match path directive.
What would be the recommended way to have all daemons running as the ceph:ceph user?
Could we chown -R the monitor and osd data directories under /var/lib/ceph one by one while keeping up service?
Yes. To minimize the downtime, you can do the chown twice: once before restarting the daemons, while they are still running with root user permissions. Then stop the daemons, do the chown again, but only on the changed files (find /var/lib/ceph/ ! -uid 64045 -print0 | xargs -0 chown ceph:ceph), and start the Ceph daemons with setuser and setgroup set to ceph.
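To spell that out as a rough per-node sketch (using the Upstart commands from earlier in the thread; 64045 is the ceph uid here, verify with "id ceph" on your boxes):

# first pass while the daemons are still running as root
chown -R ceph:ceph /var/lib/ceph
# stop the daemons, then fix up only the files that changed since the first pass
stop ceph-osd-all
find /var/lib/ceph/ ! -uid 64045 -print0 | xargs -0 chown ceph:ceph
# drop the "setuser match path" override so the default setuser/setgroup ceph takes effect, then start again
start ceph-osd-all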
--
With regards,
Richard Arends.
Snow BV / http://snow.nl
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com