Re: monitor failover of ceph

You must have a quorum, i.e. MORE than 50% of your monitors functioning, for the cluster to function.  A quorum is a strict majority: with one of two monitors you only have 50%, which isn't a majority, so the cluster stops I/O.
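
For example, adding a third monitor would let the cluster keep a majority (two of three) when any single monitor fails.  A minimal sketch of the extra ceph.conf section, assuming a hypothetical third host (the hostname sheepdog3 and its address are placeholders, not taken from your setup):

[mon.2]
 ; hypothetical third monitor; point host/addr at a real machine
 host = sheepdog3
 mon addr = 192.168.0.x:6789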

Sent from my iPad

> On Oct 11, 2013, at 11:28 PM, "飞" <duron800@xxxxxx> wrote:
> 
> Hello, I am a new user of Ceph.
> I have built a Ceph test environment for block storage
> with 2 OSDs and 2 monitors. Apart from the failover test, all other tests are normal.
> When I perform the failover test, stopping one OSD leaves the cluster OK,
> but stopping one monitor makes the whole cluster die. Why? Thank you.
> 
> my configure file :
> ; global
> [global]
>  ; enable secure authentication
>  ; auth supported = cephx
>  
>  auth cluster required = none
>  auth service required = none
>  auth client required = none
>  
>  mon clock drift allowed = 3
>  
> ;  monitors
> ;  You need at least one.  You need at least three if you want to
> ;  tolerate any node failures.  Always create an odd number.
> [mon]
>  mon data = /home/ceph/mon$id
>  ; some minimal logging (just message traffic) to aid debugging
>  debug ms = 1
> [mon.0]
>  host = sheepdog1
>  mon addr = 192.168.0.19:6789
>  
> [mon.1]
>  mon data = /var/lib/ceph/mon.$id
>  host = sheepdog2
>  mon addr = 192.168.0.219:6789
>  
> ; mds
> ;  You need at least one.  Define two to get a standby.
> [mds]
>  ; where the mds keeps its secret encryption keys
>  keyring = /home/ceph/keyring.mds.$id
> [mds.0]
>  host = sheepdog1
> ; osd
> ;  You need at least one.  Two if you want data to be replicated.
> ;  Define as many as you like.
> [osd]
> ; This is where the btrfs volume will be mounted.      
>  osd data = /home/ceph/osd.$id
>  osd journal = /home/ceph/osd.$id/journal
>  osd journal size = 512
>  ; working with ext4
>  filestore xattr use omap = true
>  
>  ; solve rbd data corruption
>  filestore fiemap = false
> 
> [osd.0]
>  host = sheepdog1
>  osd data = /var/lib/ceph/osd/diskb
>  osd journal = /var/lib/ceph/osd/diskb/journal
> [osd.2]
>  host = sheepdog2
>  osd data = /var/lib/ceph/osd/diskc
>  osd journal = /var/lib/ceph/osd/diskc/journal
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




