Re: Jewel ubuntu release is half cooked

Hi,


On Mon, May 23, 2016 at 8:24 PM, Anthony D'Atri <aad@xxxxxxxxxxxxxx> wrote:
>
>
> Re:
>
>> 2. Inefficient chown documentation - The documentation states that one should "chown -R ceph:ceph /var/lib/ceph" if one wants ceph-osd to run as user ceph rather than as root. This command chowns one osd at a time. I consider my cluster fairly small, with just 30 osds across 3 osd servers. The chown takes about 60 minutes per osd (3TB disks at about 60% usage), so it would take about 10 hours to complete on each osd server, which is just mad in my opinion. I can't imagine this working well at all on servers with 20-30 osds! IMHO the docs should be adjusted to instruct users to run the chown in _parallel_ on all osds instead of doing it one by one.
>
>
> I suspect the docs are playing it safe there; Ceph runs on servers of widely varying scale, capabilities, and robustness.  Running 30 chown -R processes in parallel could have a noticeable impact on a production server.
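
For anyone who does want the parallel route, a minimal sketch
(assuming the default /var/lib/ceph/osd/* layout; the load caveat
above still applies):

  # One chown per OSD data directory, all running at once; this
  # multiplies the metadata I/O load by the number of OSDs.
  for dir in /var/lib/ceph/osd/*; do
      chown -R ceph:ceph "$dir" &
  done
  wait
  # The remaining, much smaller parts of /var/lib/ceph (mon,
  # bootstrap keys, ...) still need their own chown afterwards.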

I did this process in two separate, isolated steps:
- First I upgraded, making sure "setuser match path =
/var/lib/ceph/$type/$cluster-$id" was set; the option is documented in
the 9.2.0 and 10.2.0 release notes (see the ceph.conf sketch below).
This meant that after the upgrade everything was still running as
root, as before, and there was no need to change permissions yet.
- Then, one daemon at a time, I ran chown -R back to root (which
changed nothing, but read all the filesystem metadata into the cache),
stopped the daemon, re-ran the chown to ceph (which was now very
fast), and started the daemon again (see the shell sketch below).
Actual downtime per daemon was under 5 minutes. I did set the noout
flag while converting the OSDs.
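
Step one boils down to having this in ceph.conf before the upgrade; a
minimal sketch, and note that putting it under [global] is my
assumption (the option line itself is quoted from the release notes):

  [global]
  # Daemons keep running as the owner of their data path, i.e. still
  # root before the chown has happened.
  setuser match path = /var/lib/ceph/$type/$cluster-$id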
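
For a single OSD, say osd.0, the dance looked roughly like this (a
sketch, assuming a systemd host and the default data path; adjust the
service commands for upstart or sysvinit):

  ceph osd set noout                # avoid rebalancing during restarts
  # Pass 1: a no-op chown while the daemon still runs as root; it
  # changes nothing but pulls all the inode metadata into the cache.
  chown -R root:root /var/lib/ceph/osd/ceph-0
  systemctl stop ceph-osd@0
  # Pass 2: the real chown, fast now that the metadata is cached.
  chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
  systemctl start ceph-osd@0
  ceph osd unset noout              # once every OSD has been converted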

To handle the journals I created a udev rules file, so the journal
devices were already owned by ceph before the switch; root was still
able to use them in the meantime. A sketch of such a rule follows.
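
Something along these lines, assuming GPT journal partitions carrying
the standard Ceph journal partition type GUID (the file name is
arbitrary; the rule must stay on one line):

  # /etc/udev/rules.d/95-ceph-journal.rules  (hypothetical name)
  # Hand ownership of anything tagged as a Ceph journal partition to
  # ceph:ceph at device add time.
  ACTION=="add", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c4-b4b80ceff106", OWNER="ceph", GROUP="ceph", MODE="0660"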

This process works for both OSDs and MONs. I have yet to do the MDS;
my radosgw daemons were already running as the ceph user.