"won leader election with quorum" during "osd setcrushmap"

Hi,
due to PG trouble with an EC pool, I am modifying the crushmap (adding "step set_choose_tries 200") from

rule ec7archiv {
        ruleset 6
        type erasure
        min_size 3
        max_size 20
        step set_chooseleaf_tries 5
        step take default
        step chooseleaf indep 0 type host
        step emit
}

to

rule ec7archiv {
        ruleset 6
        type erasure
        min_size 3
        max_size 20
        step set_chooseleaf_tries 5
        step set_choose_tries 200
        step take default
        step chooseleaf indep 0 type host
        step emit
}
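
For reference, I made the change via the usual getcrushmap/crushtool round trip (the file names are just examples):

        ceph osd getcrushmap -o crushmap.bin
        crushtool -d crushmap.bin -o crushmap.txt
        # edit crushmap.txt to add the set_choose_tries step, then recompile:
        crushtool -c crushmap.txt -o crushmap-new.bin
        ceph osd setcrushmap -i crushmap-new.bin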

"ceph osd setcrushmap" runs since one hour and ceph -w give following output:

2015-03-25 17:20:18.163295 mon.0 [INF] mdsmap e766: 1/1/1 up {0=b=up:active}, 1 up:standby
2015-03-25 17:20:18.163370 mon.0 [INF] osdmap e130004: 91 osds: 91 up, 91 in
2015-03-25 17:20:28.525445 mon.0 [INF] from='client.? 172.20.2.1:0/1007537' entity='client.admin' cmd=[{"prefix": "osd setcrushmap"}]: dispatch
2015-03-25 17:20:28.525580 mon.0 [INF] mon.0 calling new monitor election
2015-03-25 17:20:28.526263 mon.0 [INF] mon.0@0 won leader election with quorum 0,1,2


Fortunately, the clients (KVM) still have access to the cluster!

How long should such a setcrushmap take? Normally it's done in a few seconds.
Does the setcrushmap still have a chance to finish?
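
Side note: as I understand it, the modified rule can also be checked offline with crushtool before injecting it. Assuming rule 6 from above, and --num-rep equal to the pool's k+m (9 is just a placeholder here):

        crushtool -i crushmap-new.bin --test --rule 6 --num-rep 9 --show-bad-mappings

If that prints no bad mappings, the set_choose_tries value should be high enough.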

Udo
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



