Re: Deploying ceph by hand: a few omissions

I have an active and a standby setup.  The failover takes less than a
minute if you manually stop the active service.  Add whatever the
failover timeout is on top of that if things go pear-shaped on the box
itself.
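
For reference, the standby is nothing fancy -- any extra ceph-mds daemon
just registers itself as a standby.  A minimal sketch of what I mean,
with made-up host names, plus the status check, rather than a paste of
my actual config:

    # ceph.conf: a second ceph-mds daemon automatically becomes a standby
    [mds.a]
        host = kvmhost1      # hypothetical host name
    [mds.b]
        host = kvmhost2      # hypothetical host name

    # check which daemon is active and that a standby is registered
    ceph mds stat
    # e.g. "e42: 1/1/1 up {0=a=up:active}, 1 up:standby"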

Things are back to letters now for mds server names.  I had started with
letters on firefly, as recommended.  Then somewhere along the line
(giant?) I was getting prodded to use numbers instead.  Now with later
hammer and infernalis I'm back to getting scolded for not using letters
:-)
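
Concretely it's just the daemon id in the ceph.conf section name and on
the command line -- something like this, host name made up:

    [mds.a]              # letter-style id, as the firefly docs suggested
        host = mdsbox
    # versus
    [mds.0]              # number-style id, what I was nudged toward later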

I'm holding off on jewel for the moment until I get things straightened
out with the kde4 to plasma upgrade.  I think that one got stabilized
before it was quite ready for prime time.  Even then I'll probably take
a good long while to back up some stuff before I try out the shiny new
fsck utility.

On 05/01/2016 07:13 PM, Stuart Longland wrote:
> Hi Bill,
> On 02/05/16 04:37, Bill Sharer wrote:
>> Actually you didn't need to do a udev rule for raw journals.  Disk
>> devices in gentoo have their group ownership set to 'disk'.  I only
>> needed to drop ceph into that group in /etc/group when going from
>> hammer to infernalis.
> Yeah, I recall trying that on the Ubuntu-based Ceph cluster at work, and
> Ceph still wasn't happy, hence I've gone the route of making the
> partition owned by the ceph user.
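
Either way gets the journal device writable by the ceph user.  For the
archives, the two approaches boil down to roughly this -- a sketch only,
the device name and rule file are made up:

    # Gentoo route: let the ceph user into the 'disk' group that already
    # owns the raw block devices
    gpasswd -a ceph disk

    # Udev route: pin the journal partition itself to ceph:ceph
    # /etc/udev/rules.d/70-ceph-journal.rules  (sdb2 is just an example)
    SUBSYSTEM=="block", KERNEL=="sdb2", OWNER="ceph", GROUP="ceph", MODE="0660"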

>> Did you poke around any of the Ceph howtos on the gentoo wiki?  It's
>> been a while since I wrote this guide when I first rolled out with
>> firefly:
>>
>> https://wiki.gentoo.org/wiki/Ceph/Guide
>>
>> That used to be https://wiki.gentoo.org/wiki/Ceph before other people
>> came in behind me and expanded on things.
> No, hadn't looked at that.

>> I've pretty much had these bookmarks sitting around forever for adding
>> and removing mons and osds:
>>
>> http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/
>> http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/
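
The by-hand OSD add from that second link is short enough to jot down
here.  Treat this as a rough sketch against hammer/infernalis, from
memory rather than a paste, and it assumes the host bucket already
exists in the crush map:

    # allocate a uuid and an osd id for the new disk
    UUID=$(uuidgen)
    ID=$(ceph osd create $UUID)

    # create the data dir, then initialise the store and its keyring
    mkdir -p /var/lib/ceph/osd/ceph-$ID
    ceph-osd -i $ID --mkfs --mkkey --osd-uuid $UUID

    # register the key and weight the osd into the crush map on this host
    ceph auth add osd.$ID osd 'allow *' mon 'allow profile osd' \
        -i /var/lib/ceph/osd/ceph-$ID/keyring
    ceph osd crush add osd.$ID 1.0 host=$(hostname -s)

    # then start the osd with whatever init system the box uses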

>> For the MDS server I think I originally went to this blog, which also
>> has other good info:
>>
>> http://www.sebastien-han.fr/blog/2013/05/13/deploy-a-ceph-mds-server/
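
For what it's worth, the by-hand MDS setup in that post comes down to a
keyring plus a ceph.conf stanza.  Roughly, with the id and host name
made up, and not to be taken as the exact recipe:

    # create the mds data dir and a key for mds.a
    mkdir -p /var/lib/ceph/mds/ceph-a
    ceph auth get-or-create mds.a mds 'allow *' osd 'allow *' \
        mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-a/keyring

    # ceph.conf stanza; then start the ceph-mds service for id 'a'
    [mds.a]
        host = mdsbox        # hypothetical host name
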
> That might be my next step, depending on how stable CephFS is now.  One
> thing that has worried me is that since you can only deploy one MDS,
> what happens if that MDS goes down?
>
> If it's simply a case of spinning up another one, then fine, I can put
> up with a little downtime.  If there's data loss though, then no,
> that's not good.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


