Hi Mark,
Thanks for your reply. I do not think I am running the packaged version;
the output below shows it is my own build (0.48.2argonaut.fast at commit 000...).
root@client:/users/utos# rbd -v
ceph version 0.48.2argonaut.fast
(commit:000000000000000000000000000000000000000000000)
root@client:/users/utos# /usr/bin/rbd -v
ceph version 0.48.2argonaut
(commit:3e02b2fad88c2a95d9c0c86878f10d1beb780bfe)
root@client:/users/utos# /usr/local/bin/rbd -v
ceph version 0.48.2argonaut.fast
(commit:000000000000000000000000000000000000000000000)
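In case it is useful, here is a rough sanity check (assuming the rbd binary is
dynamically linked against librados/librbd and that the source build installed
its libraries under the default /usr/local/lib prefix; both are assumptions on
my side) to confirm which rbd the shell resolves and which shared libraries
that binary actually loads:

# list every rbd found on the PATH, in resolution order
type -a rbd
# show which librados/librbd shared objects the resolved binary will load
ldd "$(command -v rbd)" | grep -E 'librados|librbd'
# compare the packaged copies against the ones installed by 'make install'
ls -l /usr/lib/librados.so* /usr/local/lib/librados.so* 2>/dev/null

If ldd points at the packaged copies in /usr/lib rather than the ones under
/usr/local/lib, that could explain why a rebuilt CrushWrapper.cc never takes
effect even though rbd -v reports the .fast version.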
Xing
On 01/05/2013 08:00 PM, Mark Kirkwood wrote:
I'd hazard a guess that you are still (accidentally) running the
packaged binary - the packaged version installs in /usr/bin (etc) but
your source build will probably be in /usr/local/bin. I've been
through this myself and purged the packaged version before building
and installing from source (just to be sure).
Cheers
Mark
On 06/01/13 14:55, Xing Lin wrote:
After changing the client-side code, I can map/unmap rbd block devices
at client machines. However, I am not able to list rbd block devices. On
the client machine, I first installed the 0.48.2argonaut package for Ubuntu
and then compiled and installed my own version following the instructions
on this page (http://ceph.com/docs/master/install/building-ceph/). The
client failed to recognize the fifth bucket algorithm I added. I
searched for "unsupported bucket algorithm" in the Ceph code base and that
text appears only in src/crush/CrushWrapper.cc. I checked
decode_crush_bucket() and it should be able to recognize the fifth
algorithm. Even after I changed the error message (adding a "[XXX]" marker
and printing the values of the two bucket algorithm macros), it still
printed the same, unchanged error message. So it seems that my new version
of CrushWrapper.cc is not being used when the final rbd binary is built. Would you
please tell me where the problem is and how I can fix it? Thank you very
much.