Re: Upgrade from 12.2.1 to 12.2.2 broke my CephFs


 



On 11/12/2017 15:13, Tobias Prousa wrote:
Hi there,

I'm running a CEPH cluster for some libvirt VMs and a CephFS providing /home to ~20 desktop machines. There are 4 Hosts running 4 MONs, 4MGRs, 3MDSs (1 active, 2 standby) and 28 OSDs in total. This cluster is up and running since the days of Bobtail (yes, including CephFS).

You might consider shutting down one MON, since MONs should run in odd numbers, and for your cluster 3 is more than sufficient.

For the reasons why, read the Ceph docs or search this mailing list.
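The arithmetic behind that advice can be sketched in a few lines (an illustration of monitor quorum math, not Ceph code): the MONs use Paxos, so a strict majority must be up for the cluster to make progress, and an even count tolerates no more failures than the odd count below it.

```python
def mons_tolerated(n):
    """Monitor failures tolerated while keeping quorum with n MONs.

    Quorum requires a strict majority of the monitor map, so with n
    monitors you can lose n - (n // 2 + 1) of them and keep running.
    """
    majority = n // 2 + 1
    return n - majority

for n in (3, 4, 5):
    print(f"{n} MONs -> tolerate {mons_tolerated(n)} failure(s)")
```

Both 3 and 4 MONs tolerate only a single failure, so the fourth MON adds nothing but an extra vote that can deadlock things; you need 5 before you can survive two.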

It probably doesn't help with your current problem, but it could help you prevent a split-brain situation in the future.

--WjW


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


