Re: ceph stays degraded after crushmap rearrangement


 



Hi,

On 05.01.2013 at 18:56, Sage Weil wrote:
But my rbd images are gone?!

[1202: ~]# rbd -p kvmpool1 ls
[1202: ~]#

Oh.. I think this is related to the librados/librbd compatibility issue I
mentioned yesterday.  Please make sure the clients (librados, librbd) are
also running the latest testing branch.

Ah OK - thanks, that's it. Ceph has now also recovered completely with the old crushmap.
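
For the record, I confirmed the clients were on the same testing packages roughly like this (the package query is Debian-style, adjust for your distro):

-----------------------------------------
# version of the locally installed ceph tools
ceph --version

# check which librados/librbd packages the client hosts actually have
dpkg -l | grep -E 'librados|librbd'
-----------------------------------------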

OK, now back to my original problem.

I wanted to change from this:
-----------------------------------------
...

rack D2-switchA {
        id -100         # do not change unnecessarily
        # weight 12.000
        alg straw
        hash 0  # rjenkins1
        item server1263 weight 4.000
        item server1264 weight 4.000
        item server1265 weight 4.000
}
rack D2-switchB {
        id -101         # do not change unnecessarily
        # weight 12.000
        alg straw
        hash 0  # rjenkins1
        item server1266 weight 4.000
        item server1267 weight 4.000
        item server1268 weight 4.000
}
root root {
        id -10000               # do not change unnecessarily
        # weight 24.000
        alg straw
        hash 0  # rjenkins1
        item D2-switchA weight 12.000
        item D2-switchB weight 12.000
}

...
-----------------------------------------

to this one:

-----------------------------------------
...

rack D2 {
        id -100         # do not change unnecessarily
        # weight 24.000
        alg straw
        hash 0  # rjenkins1
        item cloud1-1263 weight 4.000
        item cloud1-1264 weight 4.000
        item cloud1-1265 weight 4.000
        item cloud1-1266 weight 4.000
        item cloud1-1267 weight 4.000
        item cloud1-1268 weight 4.000
}
root root {
        id -10000               # do not change unnecessarily
        # weight 24.000
        alg straw
        hash 0  # rjenkins1
        item D2 weight 24.000
}

...
-----------------------------------------

This was where all the problems started. Is this wrong or simply not possible?
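
For reference, I applied the change with the usual decompile/edit/recompile cycle, roughly like this (file names are just placeholders):

-----------------------------------------
# dump the current crushmap and decompile it to text
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# edit crushmap.txt (merge the two racks into "D2" as shown above), then recompile
crushtool -c crushmap.txt -o crushmap.new.bin

# sanity check: simulate placement with 2 replicas before injecting
crushtool --test -i crushmap.new.bin --num-rep 2 --show-utilization

# inject the new map into the running cluster
ceph osd setcrushmap -i crushmap.new.bin
-----------------------------------------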

Greets,
Stefan
--