Actually, you didn't need a udev rule for raw journals. Disk
devices on Gentoo have their group ownership set to 'disk'; I only
needed to drop the ceph user into that group in /etc/group when going
from hammer to infernalis.
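For the record, a minimal sketch of that group change (assuming the ceph user already exists; gpasswd avoids hand-editing /etc/group directly):

```shell
# Add the ceph user to the 'disk' group so it can open the raw
# journal partitions (group-owned by 'disk' on Gentoo).
gpasswd -a ceph disk

# Verify the membership took effect.
getent group disk
```

The running daemons won't see the new group until they're restarted.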
Did you poke around any of the Ceph howtos on the Gentoo wiki? It's
been a while since I wrote this guide, back when I first rolled out with firefly:
https://wiki.gentoo.org/wiki/Ceph/Guide
That used to be https://wiki.gentoo.org/wiki/Ceph before other people
came in behind me and expanded on things.
I've pretty much had these bookmarks sitting around forever for adding
and removing mons and OSDs:
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/
For the MDS server, I think I originally went to this blog, which also has
other good info:
http://www.sebastien-han.fr/blog/2013/05/13/deploy-a-ceph-mds-server/
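On the activation-key question below: as I recall, the manual-deployment docs generate the bootstrap-osd keyring roughly like this. Treat the capability string and paths as a sketch against the defaults of that era, not gospel for every release:

```shell
# Create (or fetch, if it already exists) the bootstrap-osd key on a
# node with admin credentials, and write it where ceph-disk expects it.
ceph auth get-or-create client.bootstrap-osd \
    mon 'allow profile bootstrap-osd' \
    -o /var/lib/ceph/bootstrap-osd/ceph.keyring

# With the keyring in place, activation should work; the device path
# here (the OSD data partition) is just an example.
ceph-disk activate --activate-key /var/lib/ceph/bootstrap-osd/ceph.keyring /dev/sda1
```

If the keyring sits in the default location, plain `ceph-disk activate` should pick it up without the --activate-key flag.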
On 05/01/2016 06:46 AM, Stuart Longland wrote:
Hi all,
This evening I was in the process of deploying a Ceph cluster by hand.
I did it by hand because, to my knowledge, ceph-deploy doesn't support
Gentoo, and my cluster here runs that.
The instructions I followed are these:
http://docs.ceph.com/docs/master/install/manual-deployment and I'm
running the 10.0.2 release of Ceph:
ceph version 10.0.2 (86764eaebe1eda943c59d7d784b893ec8b0c6ff9)
Things went okay bootstrapping the monitors. I'm running a 3-node
cluster, with OSDs and monitors co-located. Each node has a 1TB 2.5"
HDD and a 40GB partition on SSD for the journal.
Things went pear-shaped, however, when I tried bootstrapping the OSDs.
All was going fine until it came time to activate my first OSD:
ceph-disk activate barfed because I didn't have the bootstrap-osd key.
No one told me I needed to create one, or how to do it. There's a brief
note about using --activate-key, but no word on what to pass as the
argument. I tried passing in my admin keyring in /etc/ceph, but it
didn't like that.
In the end, I muddled my way through the manual OSD deployment steps,
which worked fine. After correcting permissions for the ceph user, I
found the OSDs came up. As an added bonus, I now know how to work
around the journal permission issue at work since I've reproduced it
here, using a UDEV rules file like the following:
SUBSYSTEM=="block", KERNEL=="sda7", OWNER="ceph", GROUP="ceph", MODE="0600"
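(A rule like that can be applied without a reboot; the file name under /etc/udev/rules.d/ below is just an example, not a required path:)

```shell
# Save the rule, e.g. as /etc/udev/rules.d/99-ceph-journal.rules,
# then ask udev to re-read its rule files...
udevadm control --reload-rules

# ...and re-run the rules against the journal partition so the new
# owner/group/mode take effect immediately.
udevadm trigger --subsystem-match=block --sysname-match=sda7
```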
The cluster seems to be happy enough now, but some notes on how one
generates the OSD activation keys to use with `ceph-disk activate` would
be a big help.
Regards,
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com