Hello, I am a new user of Ceph.
I have built a Ceph test environment for block storage with 2 OSDs and 2 monitors. Apart from the failover test, everything works normally.
When I run the failover test, stopping one OSD leaves the cluster OK,
but stopping one monitor makes the whole cluster die. Why? Thank you.
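For reference, this is roughly how I check the cluster and monitor state during the test (standard ceph CLI commands; a sketch, not my exact session):

# overall cluster status, including the monitor map
ceph -s
# which monitors exist and which of them are in quorum
ceph mon stat
ceph quorum_status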
My configuration file:
; global
[global]
; enable secure authentication
; auth supported = cephx
auth cluster required = none
auth service required = none
auth client required = none
mon clock drift allowed = 3
; monitors
; You need at least one. You need at least three if you want to
; tolerate any node failures. Always create an odd number.
[mon]
mon data = "">
; some minimal logging (just message traffic) to aid debugging
debug ms = 1
[mon.0]
host = sheepdog1
mon addr = 192.168.0.19:6789
[mon.1]
mon data = "">
host = sheepdog2
mon addr = 192.168.0.219:6789
; mds
; You need at least one. Define two to get a standby.
[mds]
; where the mds keeps its secret encryption keys
keyring = /home/ceph/keyring.mds.$id
[mds.0]
host = sheepdog1
; osd
; You need at least one. Two if you want data to be replicated.
; Define as many as you like.
[osd]
; This is where the btrfs volume will be mounted.
osd data = "">
osd journal = /home/ceph/osd.$id/journal
osd journal size = 512
; working with ext4
filestore xattr use omap = true
; work around rbd data corruption
filestore fiemap = false
[osd.0]
host = sheepdog1
osd data = "">
osd journal = /var/lib/ceph/osd/diskb/journal
[osd.2]
host = sheepdog2
osd data = "">
osd journal = /var/lib/ceph/osd/diskc/journal
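If a third monitor turns out to be needed (the comment in the [mon] section above says at least three are required to tolerate a node failure), I assume the extra section would look roughly like this; the hostname and address are placeholders, not part of my current setup:

; hypothetical third monitor (placeholder host and IP)
[mon.2]
host = <third-host>
mon addr = <third-host-ip>:6789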