Re: Cluster Map Problems

Hi Greg,

Setting the OSD out manually triggered the recovery.
But the question now is: why wasn't the OSD marked out automatically after
300 seconds? It's a default cluster, running the 0.59 build from your site,
and I haven't changed any settings except the crushmap.
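For anyone following along, this is roughly how marking an OSD out by hand looks (osd.7 is just an example id; substitute the OSD that is down):

```shell
# Show the cluster map: a down OSD that is still "in" keeps its weight,
# and no data migrates until it is also marked out.
ceph osd tree

# Mark the OSD out manually to trigger recovery (example id).
ceph osd out 7

# Watch recovery/backfill progress.
ceph -s
```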

Here is my ceph.conf:

-martin

[global]
	auth cluster required = none
	auth service required = none
	auth client required = none
#	log file = ""
	log_max_recent = 100
	log_max_new = 100

[mon]
	mon data = /data/mon.$id
[mon.a]
	host = store1
	mon addr = 192.168.195.31:6789
[mon.b]
	host = store3
	mon addr = 192.168.195.33:6789
[mon.c]
	host = store5
	mon addr = 192.168.195.35:6789
[osd]
	journal aio = true
	osd data = /data/osd.$id
	osd mount options btrfs = rw,noatime,nodiratime,autodefrag
	osd mkfs options btrfs = -n 32k -l 32k

[osd.0]
	host = store1
	osd journal = /dev/sdg1
	btrfs devs = /dev/sdc
[osd.1]
	host = store1
	osd journal = /dev/sdh1
	btrfs devs = /dev/sdd
[osd.2]
	host = store1
	osd journal = /dev/sdi1
	btrfs devs = /dev/sde
[osd.3]
	host = store1
	osd journal = /dev/sdj1
	btrfs devs = /dev/sdf
[osd.4]
	host = store2
	osd journal = /dev/sdg1
	btrfs devs = /dev/sdc
[osd.5]
	host = store2
	osd journal = /dev/sdh1
	btrfs devs = /dev/sdd
[osd.6]
	host = store2
	osd journal = /dev/sdi1
	btrfs devs = /dev/sde
[osd.7]
	host = store2
	osd journal = /dev/sdj1
	btrfs devs = /dev/sdf
[osd.8]
	host = store3
	osd journal = /dev/sdg1
	btrfs devs = /dev/sdc
[osd.9]
	host = store3
	osd journal = /dev/sdh1
	btrfs devs = /dev/sdd
[osd.10]
	host = store3
	osd journal = /dev/sdi1
	btrfs devs = /dev/sde
[osd.11]
	host = store3
	osd journal = /dev/sdj1
	btrfs devs = /dev/sdf
[osd.12]
	host = store4
	osd journal = /dev/sdg1
	btrfs devs = /dev/sdc
[osd.13]
	host = store4
	osd journal = /dev/sdh1
	btrfs devs = /dev/sdd
[osd.14]
	host = store4
	osd journal = /dev/sdi1
	btrfs devs = /dev/sde
[osd.15]
	host = store4
	osd journal = /dev/sdj1
	btrfs devs = /dev/sdf
[osd.16]
	host = store5
	osd journal = /dev/sdg1
	btrfs devs = /dev/sdc
[osd.17]
	host = store5
	osd journal = /dev/sdh1
	btrfs devs = /dev/sdd
[osd.18]
	host = store5
	osd journal = /dev/sdi1
	btrfs devs = /dev/sde
[osd.19]
	host = store5
	osd journal = /dev/sdj1
	btrfs devs = /dev/sdf
[osd.20]
	host = store6
	osd journal = /dev/sdg1
	btrfs devs = /dev/sdc
[osd.21]
	host = store6
	osd journal = /dev/sdh1
	btrfs devs = /dev/sdd
[osd.22]
	host = store6
	osd journal = /dev/sdi1
	btrfs devs = /dev/sde
[osd.23]
	host = store6
	osd journal = /dev/sdj1
	btrfs devs = /dev/sdf
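For reference, the 300-second timeout I mean corresponds to the `mon osd down out interval` option (300 seconds by default, as far as I understand). If it needed tuning, I assume it could be set explicitly under the existing [mon] section, something like:

```ini
[mon]
	# Seconds a down OSD stays "in" before the monitors mark it out
	# automatically; recovery only starts once the OSD is out.
	mon osd down out interval = 300
```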


On 28.03.2013 19:01, Gregory Farnum wrote:
> Your crush map looks fine to me. I'm saying that your ceph -s output
> showed the OSD still hadn't been marked out. No data will be migrated
> until it's marked out.
> After ten minutes it should have been marked out, but that's based on
> a number of factors you have some control over. If you just want a
> quick check of your crush map you can mark it out manually, too.
> -Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



