rbd map vsmpool_hp1/rbd9 --id admin --> rbd: add failed: (5) Input/output error

Hello Ceph Users:

 

A bare-metal Ceph client is attempting to map an RBD volume via the kernel RBD driver; the map fails and returns an I/O error.

This host is a Ceph client only, with no MDS, OSD, or MON daemons running. See the I/O error output below.
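For context, the only local pieces a kernel-RBD client should need are the cluster config and the client keyring; a minimal sanity check, assuming the default /etc/ceph paths (not captured from this host):

[root@root ceph]# ls /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring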

 

 

Client Host Linux Kernel Version:

[root@root ceph]# uname -a

Linux root 3.10.25-11.el6.centos.alt.x86_64 #1 SMP Fri Dec 27 21:44:15 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

 

Ceph Version:

[root@root ceph]# ceph -v

ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)

 

Check Kernel RBD driver:

[root@root ceph]# locate rbd.ko

/lib/modules/3.10.25-11.el6.centos.alt.x86_64/kernel/drivers/block/rbd.ko

/lib/modules/3.10.25-11.el6.centos.alt.x86_64/kernel/drivers/block/drbd/drbd.ko
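locate only confirms the module file is on disk; a quick sketch of actually loading it and confirming it registers (standard commands, output not captured here):

[root@root ceph]# modprobe rbd
[root@root ceph]# lsmod | grep rbd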

 

Check Client to Ceph-Server Connections:

[root@root ceph]# ceph osd lspools

0 data,1 metadata,2 rbd,3 vsmpool_hp1,4 vsmpool_perf1,5 vsmpool_vperf1,6 openstack_hp1,7 openstack_perf1,8 openstack_vperf1,9 vsmpool_perf2,10 vsmpool_hp2,11 vsmpool_vperf2,12 testopnstack,13 ec_perf_pool,14 ec_perf_pool1,15 ec_perf_pool2,16 ec_hiperf_pool1,17 ec_valperf_pool1,
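A rough sketch of a fuller connectivity check with the same client id (standard ceph CLI calls; output omitted here):

[root@root ceph]# ceph -s --id admin
[root@root ceph]# ceph health --id admin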

 

Created RBD:

[root@root ceph]# rbd create rbd9  --size 104800 --pool vsmpool_hp1 --id admin                                                

 

Check RBD:

[root@root ceph]# rbd ls vsmpool_hp1

rbd1

rbd2

rbd3

rbd4

rbd5

rbd6

rbd7

rbd8

rbd9

 

Display RBD INFO:

[root@root ceph]# rbd info vsmpool_hp1/rbd9

rbd image 'rbd9':

        size 102 GB in 26200 objects

        order 22 (4096 kB objects)

        block_name_prefix: rb.0.227915.238e1f29

        format: 1
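As a quick arithmetic sanity check on the numbers above (assuming the create size of 104800 is in MB and order 22 means 4 MB objects):

[root@root ceph]# echo $((104800 / 4))
26200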

 

Map RBD:

[root@root ceph]# rbd map vsmpool_hp1/rbd9 --id admin                                                                         

rbd: add failed: (5) Input/output error
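The kernel rbd driver usually logs the underlying reason for the EIO in the kernel ring buffer; a minimal sketch of how I would capture it right after the failed map (output not included here):

[root@root ceph]# dmesg | tail -n 20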

 

 

Thank you in advance for sharing any possible solution to this error.

 

Regards,

-Ben

