Hmm, I created another image in a different pool, one without a cache tier.
[root@ceph01 cluster]# rbd create test/image --size 102400
[root@ceph01 cluster]# rbd -p test ls -l
NAME     SIZE PARENT FMT PROT LOCK
image 102400M          1
[root@ceph01 cluster]# ceph osd dump | grep test
pool 4 'test' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 2049 owner 0 flags hashpspool stripe_width 0
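While at it, a quick check that no pool in the cluster still carries cache-tier settings (a sketch; assuming tiered pools show tiers/read_tier/write_tier and cache_mode fields in osd dump, as on firefly):
[root@ceph01 cluster]# ceph osd dump | grep -E 'tiers|read_tier|write_tier|cache_mode'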
I get the same error...
[root@ceph01 cluster]# rbd map -p test image
rbd: add failed: (5) Input/output error
Mar 25 13:53:56 ceph01 kernel: libceph: client11343 fsid 10b46114-ac17-404e-99e3-69b34b85c901
Mar 25 13:53:56 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
Mar 25 13:53:56 ceph01 kernel: libceph: osdc handle_map corrupt msg
Mar 25 13:53:56 ceph01 kernel: libceph: mon0 192.168.100.201:6789 session established
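To correlate with the kernel-support discussion below, the client kernel version is worth recording next to these logs:
[root@ceph01 cluster]# uname -r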
Maybe I'm doing something wrong?
Thanks.
2014-03-25 13:34 GMT+04:00 Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>:
On Tue, Mar 25, 2014 at 10:59 AM, Ирек Фасихов <malmyzh@xxxxxxxxx> wrote:
> Ilya, I set "chooseleaf_vary_r 0", but I still cannot map rbd images.
>
> [root@ceph01 cluster]# rbd map rbd/tst
> 2014-03-25 12:48:14.318167 7f44717f7760 2 auth: KeyRing::load: loaded key
> file /etc/ceph/ceph.client.admin.keyring
> rbd: add failed: (5) Input/output error
>
> [root@ceph01 cluster]# cat /var/log/messages | tail
> Mar 25 12:45:06 ceph01 kernel: libceph: osdc handle_map corrupt msg
> Mar 25 12:45:06 ceph01 kernel: libceph: mon2 192.168.100.203:6789 session
> established
> Mar 25 12:46:33 ceph01 kernel: libceph: client11240 fsid
> 10b46114-ac17-404e-99e3-69b34b85c901
> Mar 25 12:46:33 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
> Mar 25 12:46:33 ceph01 kernel: libceph: osdc handle_map corrupt msg
> Mar 25 12:46:33 ceph01 kernel: libceph: mon2 192.168.100.203:6789 session
> established
> Mar 25 12:48:14 ceph01 kernel: libceph: client11313 fsid
> 10b46114-ac17-404e-99e3-69b34b85c901
> Mar 25 12:48:14 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
> Mar 25 12:48:14 ceph01 kernel: libceph: osdc handle_map corrupt msg
> Mar 25 12:48:14 ceph01 kernel: libceph: mon0 192.168.100.201:6789 session
> established
>
> I do not really understand this error. The CRUSH map is correct.
Ah, this error means that the kernel received an unsupported version of
the osdmap. Strictly speaking, the kernel client doesn't fully support
caching yet. The reason, once again, is that the tiering agent and some
of the hit_set stuff are post-3.14. It will probably make 3.15, but I'll
have to get back to you on that.
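If you need the kernel client to map images before that lands, one possible workaround (just a sketch: the pool names 'rbd' and 'cache' here are assumptions, and you'd want to verify that the monitors go back to an older ceph_pg_pool encoding once the tier is gone) is to detach the cache tier:
[root@ceph01 cluster]# ceph osd tier cache-mode cache forward    # stop caching new writes
[root@ceph01 cluster]# rados -p cache cache-flush-evict-all      # flush/evict dirty objects to the base pool
[root@ceph01 cluster]# ceph osd tier remove-overlay rbd          # stop redirecting client I/O to the cache
[root@ceph01 cluster]# ceph osd tier remove rbd cache            # detach the cache pool from the base pool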
Sorry for not letting you know earlier; I got confused.
Thanks,
Ilya
Best regards, Фасихов Ирек Нургаязович
Mobile: +79229045757