Hi Aaron,
sorry for taking so long... After I added the OSDs and buckets to the crushmap, I get:

ceph osd tree

# id	weight	type name	up/down	reweight
-3	1	host dp2
1	1		osd.1	up	1
-2	1	host dp1
0	1		osd.0	up	1
-1	0	root default

Both OSDs are up and in:

ceph osd stat

e25: 2 osds: 2 up, 2 in

ceph health detail says:

HEALTH_WARN 292 pgs stuck inactive; 292 pgs stuck unclean; clock skew detected on mon.vmsys-dp2
pg 3.f is stuck inactive since forever, current state creating, last acting []
pg 0.c is stuck inactive since forever, current state creating, last acting []
pg 1.d is stuck inactive since forever, current state creating, last acting []
pg 2.e is stuck inactive since forever, current state creating, last acting []
pg 3.8 is stuck inactive since forever, current state creating, last acting []
pg 0.b is stuck inactive since forever, current state creating, last acting []
pg 1.a is stuck inactive since forever, current state creating, last acting []
...
pg 2.c is stuck unclean since forever, current state creating, last acting []
pg 1.f is stuck unclean since forever, current state creating, last acting []
pg 0.e is stuck unclean since forever, current state creating, last acting []
pg 3.d is stuck unclean since forever, current state creating, last acting []
pg 2.f is stuck unclean since forever, current state creating, last acting []
pg 1.c is stuck unclean since forever, current state creating, last acting []
pg 0.d is stuck unclean since forever, current state creating, last acting []
pg 3.e is stuck unclean since forever, current state creating, last acting []
mon.vmsys-dp2 addr 10.0.0.22:6789/0 clock skew 16.4914s > max 0.05s (latency 0.00666228s)

All PGs have the same status. Is the clock skew an important factor?

I compiled ceph like this (eix ceph):

...
Installed versions:  0.67{tbz2}(00:54:50 01/08/14)(fuse -debug -gtk -libatomic -radosgw -static-libs -tcmalloc)

The cluster name is vmsys, the servers are dp1 and dp2. Config:

[global]
	auth cluster required = none
	auth service required = none
	auth client required = none
	auth supported = none
	fsid = 265d12ac-e99d-47b9-9651-05cb2b4387a6

[mon.vmsys-dp1]
	host = dp1
	mon addr = INTERNAL-IP1:6789
	mon data = ""

[mon.vmsys-dp2]
	host = dp2
	mon addr = INTERNAL-IP2:6789
	mon data = ""

[osd]

[osd.0]
	host = dp1
	devs = /dev/sdb1
	osd_mkfs_type = xfs
	osd data = ""

[osd.1]
	host = dp2
	devs = /dev/sdb1
	osd_mkfs_type = xfs
	osd data = ""

[mds.vmsys-dp1]
	host = dp1

[mds.vmsys-dp2]
	host = dp2

Hope this is helpful - I really don't know at the moment what is wrong. Perhaps I'll try the manual-deploy howto from Inktank, or do you have an idea?

Best
Philipp
http://www.pilarkto.net
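P.S. Re-reading the health output, I suppose I should fix the ~16.5s clock skew on dp2 before anything else. My rough plan, assuming net-misc/ntp is installed on both Gentoo boxes (otherwise emerge it first; the init script name may differ elsewhere):

# on dp2 (and dp1, to be safe): step the clock once, then keep it in sync
/etc/init.d/ntpd stop
ntpdate pool.ntp.org
/etc/init.d/ntpd start
rc-update add ntpd default

# afterwards, check whether the monitors still complain
ceph health detail | grep -i skew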
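P.P.S. Looking at my osd tree again: root default has weight 0 and the two host buckets don't seem to be nested under it, which might explain why the PGs stay in "creating" with an empty acting set - the default CRUSH rule picks from root default and finds nothing there. If that's the problem, I guess (untested!) moving the host buckets under the root would fix it:

ceph osd crush move dp1 root=default
ceph osd crush move dp2 root=default

# root default should then show weight 2 and the pgs should start peering
ceph osd tree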