Thank you Ilya! Here's the output of dmesg during command execution:

rbd: loaded rbd (rados block device)
libceph: mon1 192.168.101.43:6789 feature set mismatch, my 4a042a42 < server's 2404a042a42, missing 24000000000
libceph: mon1 192.168.101.43:6789 socket error on read
libceph: mon2 192.168.100.42:6789 feature set mismatch, my 4a042a42 < server's 2404a042a42, missing 24000000000
libceph: mon2 192.168.100.42:6789 socket error on read
libceph: mon0 192.168.101.41:6789 feature set mismatch, my 4a042a42 < server's 2404a042a42, missing 24000000000
libceph: mon0 192.168.101.41:6789 socket error on read
libceph: mon1 192.168.101.43:6789 feature set mismatch, my 4a042a42 < server's 2404a042a42, missing 24000000000
libceph: mon1 192.168.101.43:6789 socket error on read
libceph: mon1 192.168.101.43:6789 feature set mismatch, my 4a042a42 < server's 2404a042a42, missing 24000000000
libceph: mon1 192.168.101.43:6789 socket error on read
libceph: mon2 192.168.100.42:6789 feature set mismatch, my 4a042a42 < server's 2404a042a42, missing 24000000000
libceph: mon2 192.168.100.42:6789 socket error on read
libceph: mon1 192.168.101.43:6789 feature set mismatch, my 4a042a42 < server's 2404a042a42, missing 24000000000
libceph: mon1 192.168.101.43:6789 socket error on read
libceph: mon2 192.168.100.42:6789 feature set mismatch, my 4a042a42 < server's 2404a042a42, missing 24000000000
libceph: mon2 192.168.100.42:6789 socket error on read
libceph: mon1 192.168.101.43:6789 feature set mismatch, my 4a042a42 < server's 2404a042a42, missing 24000000000
libceph: mon1 192.168.101.43:6789 socket error on read
libceph: mon2 192.168.100.42:6789 feature set mismatch, my 4a042a42 < server's 2404a042a42, missing 24000000000
libceph: mon2 192.168.100.42:6789 socket error on read

-----Original Message-----
From: Ilya Dryomov [mailto:ilya.dryomov@xxxxxxxxxxx]
Sent: Friday, October 10, 2014 1:10 AM
To: Aquino, Ben O
Cc: ceph-users@xxxxxxxxxxxxxx; Ferber, Dan; Barnes, Thomas J
Subject: Re: rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error

On Fri, Oct 10, 2014 at 12:48 AM, Aquino, Ben O <ben.o.aquino@xxxxxxxxx> wrote:
> Hello Ceph Users:
>
> A bare-metal Ceph client is attempting to map a device volume via the
> kernel RBD driver; the map fails with an I/O error. This host is a
> client only -- no MDS, OSD, or MON running…see the I/O error output
> below.
>
> Client host Linux kernel version:
>
> [root@root ceph]# uname -a
> Linux root 3.10.25-11.el6.centos.alt.x86_64 #1 SMP Fri Dec 27 21:44:15 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
>
> Ceph version:
>
> [root@root ceph]# ceph -v
> ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)
>
> Check kernel RBD driver:
>
> [root@root ceph]# locate rbd.ko
> /lib/modules/3.10.25-11.el6.centos.alt.x86_64/kernel/drivers/block/rbd.ko
> /lib/modules/3.10.25-11.el6.centos.alt.x86_64/kernel/drivers/block/drbd/drbd.ko
>
> Check client-to-Ceph-server connection:
>
> [root@root ceph]# ceph osd lspools
> 0 data,1 metadata,2 rbd,3 vsmpool_hp1,4 vsmpool_perf1,5 vsmpool_vperf1,6 openstack_hp1,7 openstack_perf1,8 openstack_vperf1,9 vsmpool_perf2,10 vsmpool_hp2,11 vsmpool_vperf2,12 testopnstack,13 ec_perf_pool,14 ec_perf_pool1,15 ec_perf_pool2,16 ec_hiperf_pool1,17 ec_valperf_pool1,
>
> Created RBD:
>
> [root@root ceph]# rbd create rbd9 --size 104800 --pool vsmpool_hp1 --id admin
>
> Check RBD:
>
> [root@root ceph]# rbd ls vsmpool_hp1
> rbd1
> rbd2
> rbd3
> rbd4
> rbd5
> rbd6
> rbd7
> rbd8
> rbd9
>
> Display RBD info:
>
> [root@root ceph]# rbd info vsmpool_hp1/rbd9
> rbd image 'rbd9':
>         size 102 GB in 26200 objects
>         order 22 (4096 kB objects)
>         block_name_prefix: rb.0.227915.238e1f29
>         format: 1
>
> Map RBD:
>
> [root@root ceph]# rbd map vsmpool_hp1/rbd9 --id admin
> rbd: add failed: (5) Input/output error

Is there anything in dmesg?
I'm pretty sure you'll see something like "feature set mismatch" in there if you look. Please paste that.

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
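[Editor's note] The "missing 24000000000" in the dmesg output above is just the bitwise difference of the two feature masks libceph prints, and the individual bits can be decoded with a few lines of Python. This is a sketch using the exact values from the log; the mapping of bit numbers to named features is version-specific and is an assumption to verify against the ceph feature header for your release:

```python
# Feature masks copied from the libceph dmesg lines above.
client = 0x4a042a42      # "my" features (the 3.10 kernel client)
server = 0x2404a042a42   # features the monitors require

# Bits the server requires that the client does not advertise.
missing = server & ~client
print(hex(missing))      # prints 0x24000000000, matching "missing" in the log

# Which individual feature bits are absent on the client side.
bits = [i for i in range(missing.bit_length()) if missing >> i & 1]
print(bits)              # prints [38, 41]
```

In the firefly-era feature map, bits 38 and 41 correspond (as best I recall; check include/linux/ceph/ceph_features.h for your kernel to confirm) to erasure-coded-pool and CRUSH tunables v3 / primary-affinity support, both newer than the 3.10 kernel client. The ec_* pools visible in the lspools output above would account for the erasure-code bit; the usual remedies are a newer kernel on the client or removing the features the cluster requires.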