Hi Bill,

On 02/05/16 04:37, Bill Sharer wrote:
> Actually you didn't need to do a udev rule for raw journals.  Disk
> devices in gentoo have their group ownership set to 'disk'.  I only
> needed to drop ceph into that in /etc/group when going from hammer to
> infernalis.

Yeah, I recall trying that on the Ubuntu-based Ceph cluster at work,
and Ceph still wasn't happy, hence I've gone the route of making the
journal partition owned by the ceph user instead (the udev rule I'm
using is at [1] below, for reference).

> Did you poke around any of the ceph howtos on the gentoo wiki?  It's
> been a while since I wrote this guide when I first rolled out with
> firefly:
>
> https://wiki.gentoo.org/wiki/Ceph/Guide
>
> That used to be https://wiki.gentoo.org/wiki/Ceph before other people
> came in behind me and expanded on things.

No, I hadn't looked at that.

> I've pretty much had these bookmarks sitting around forever for
> adding and removing mons and osds:
>
> http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/
> http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/
>
> For the MDS server I think I originally went to this blog, which also
> has other good info:
>
> http://www.sebastien-han.fr/blog/2013/05/13/deploy-a-ceph-mds-server/

That might be my next step, depending on how stable CephFS is now.
One thing that has worried me: since you can only have one active MDS,
what happens if that MDS goes down?  If it's simply a case of spinning
up another one, then fine, I can put up with a little downtime.  If
there's data loss, though, then no, that's not good.  (My rough notes
on the standby situation are at [2] below.)

-- 
Stuart Longland
Systems Engineer
     _ ___
 \  /|_) |                          T: +61 7 3535 9619
  \/ | \ |   38b Douglas Street     F: +61 7 3535 9699
   SYSTEMS   Milton QLD 4064        http://www.vrt.com.au
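
[1] The udev rule mentioned above.  This is a sketch of what I'm
running; the rules file name and the device name (sdb2) are from my own
setup, so treat them as placeholders:

  # /etc/udev/rules.d/99-ceph-journal.rules
  # Hand the raw journal partition to the ceph user so ceph-osd can
  # open it directly (device name is an example from my setup):
  KERNEL=="sdb2", SUBSYSTEM=="block", OWNER="ceph", GROUP="ceph", MODE="0660"

A less device-specific variant is to match on the GPT partition type
GUID that ceph-disk stamps on journal partitions, rather than a fixed
device name:

  # Any partition tagged with the "Ceph journal" GPT type GUID:
  ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", \
      OWNER="ceph", GROUP="ceph", MODE="0660"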
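
[2] Re the single-MDS worry: as I understand it (untested, so take with
salt), you can run extra ceph-mds daemons and only one goes active; the
others sit in standby and take over if the active one dies, and the
metadata itself lives in RADOS rather than on the MDS host, so the
failure mode should be a short outage rather than data loss.  If that's
right, bringing up a second MDS by hand would look roughly like this,
going by Sebastien's post above (the hostname "mon2" is just an
example, adjust paths and IDs to taste):

  mkdir -p /var/lib/ceph/mds/ceph-mon2
  ceph auth get-or-create mds.mon2 \
      mon 'allow profile mds' osd 'allow rwx' mds 'allow' \
      -o /var/lib/ceph/mds/ceph-mon2/keyring
  ceph-mds -i mon2    # or start it via the init system of choice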