Hi Ilya. Do you happen to have the required patches for the kernel?
2014-03-25 14:51 GMT+04:00 Ирек Фасихов <malmyzh@xxxxxxxxx>:
Yep, that works.
2014-03-25 14:45 GMT+04:00 Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>:
On Tue, Mar 25, 2014 at 12:00 PM, Ирек Фасихов <malmyzh@xxxxxxxxx> wrote:
> Hmmm, I created another image in another pool, one without a cache tier.
>
> [root@ceph01 cluster]# rbd create test/image --size 102400
> [root@ceph01 cluster]# rbd -p test ls -l
> NAME      SIZE PARENT FMT PROT LOCK
> image  102400M          1
> [root@ceph01 cluster]# ceph osd dump | grep test
> pool 4 'test' replicated size 3 min_size 2 crush_ruleset 0 object_hash
> rjenkins pg_num 100 pgp_num 100 last_change 2049 owner 0 flags hashpspool
> stripe_width 0
>
> Get the same error...
>
> [root@ceph01 cluster]# rbd map -p test image
> rbd: add failed: (5) Input/output error
>
> Mar 25 13:53:56 ceph01 kernel: libceph: client11343 fsid
> 10b46114-ac17-404e-99e3-69b34b85c901
> Mar 25 13:53:56 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
> Mar 25 13:53:56 ceph01 kernel: libceph: osdc handle_map corrupt msg
> Mar 25 13:53:56 ceph01 kernel: libceph: mon0 192.168.100.201:6789 session
> established
>
> Maybe I'm doing something wrong?
No, the problem here is that the pool with hit_set stuff set still
exists and therefore is present in the osdmap. You'll have to remove
that pool with something like
# I assume "cache" is the name of the cache pool
ceph osd tier remove-overlay rbd
ceph osd tier remove rbd cache
ceph osd pool delete cache cache --yes-i-really-really-mean-it
in order to be able to map test/image.
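Once the cache pool is gone, something along these lines should confirm it and let the map go through (an untested sketch, reusing the pool/image names from this thread):
# after deleting the cache pool, no pool in 'ceph osd dump' should still list hit_set parameters
ceph osd dump | grep hit_set
# mapping should then succeed on the old kernel
rbd map -p test image
rbd showmapped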
Thanks,
Ilya
Best regards, Фасихов Ирек Нургаязович
Mobile: +79229045757