Could not find module rbd. CentOS 6.4

Hi,

I am deploying the Firefly release on CentOS 6.4, following the quick
installation instructions available at ceph.com. I am running a customized
kernel on CentOS 6.4, version 2.6.32.

I am able to create a basic Ceph storage cluster that reaches the
active+clean state. Now I am trying to create a block device image on the
Ceph client, but the command keeps printing the messages shown below:

[ceph@ceph-client1 ~]$ rbd create foo --size 1024
2014-07-25 22:31:48.519218 7f6721d43700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x6a7c50 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x6a8050).fault
2014-07-25 22:32:18.536771 7f6721b41700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718006310 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f6718006580).fault
2014-07-25 22:33:09.598763 7f6721b41700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f67180063e0 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f6718007e70).fault
2014-07-25 22:34:08.621655 7f6721b41700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718007e70 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f67180080e0).fault
2014-07-25 22:35:19.581978 7f6721b41700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718007e70 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f67180080e0).fault
2014-07-25 22:36:23.694665 7f6721b41700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718007e70 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f67180080e0).fault
2014-07-25 22:37:28.868293 7f6721b41700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718007e70 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f67180080e0).fault
2014-07-25 22:38:29.159830 7f6721b41700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718007e70 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f67180080e0).fault
2014-07-25 22:39:28.854441 7f6721b41700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718001db0 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f6718006990).fault
2014-07-25 22:40:14.581055 7f6721b41700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718001ac0 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f671800c950).fault
2014-07-25 22:41:03.794903 7f6721b41700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718004d30 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f671800c950).fault
2014-07-25 22:42:12.537442 7f6721b41700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x6a4640 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x6a4a00).fault
2014-07-25 22:43:18.912430 7f6721b41700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718008300 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f67180080e0).fault
2014-07-25 22:44:24.129258 7f6721b41700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718008300 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f6718008f80).fault
2014-07-25 22:45:29.174719 7f6721b41700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f671800a150 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f671800a620).fault
2014-07-25 22:46:34.032246 7f6721b41700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718008390 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f671800a620).fault
2014-07-25 22:47:39.551973 7f6721b41700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718008390 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f67180077e0).fault
2014-07-25 22:48:39.342226 7f6721b41700  0 -- 172.17.35.20:0/1003053 >> 172.17.35.22:6800/1875 pipe(0x7f6718001db0 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f6718003040).fault
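
These repeated pipe(...).fault messages make me think the client cannot hold
a connection to the OSD at 172.17.35.22:6800. A basic reachability test from
the client (just a sketch I have not run yet; assuming ping and nc are
installed) would be something like:

[ceph@ceph-client1 ~]$ ping -c 3 172.17.35.22
[ceph@ceph-client1 ~]$ nc -z -w 5 172.17.35.22 6800 && echo "OSD port reachable"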

I am not sure whether the block device image has actually been created.
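
As far as I know, the standard rbd commands should show whether the image
exists (I have not captured their output here):

[ceph@ceph-client1 ~]$ rbd ls          # lists images in the default 'rbd' pool
[ceph@ceph-client1 ~]$ rbd info foo    # shows size/order if 'foo' was created
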
Next I tried to map the image, which fails:
[ceph@ceph-client1 ~]$ sudo rbd map foo
ERROR: modinfo: could not find module rbd
FATAL: Module rbd not found.
rbd: modprobe rbd failed! (256)
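
My suspicion is that my customized 2.6.32 kernel simply does not ship the
rbd module: as far as I know, the rbd kernel client was only merged upstream
in Linux 2.6.37, so a 2.6.32-based kernel would not have it unless the module
was backported. A quick way to confirm (assuming a standard /lib/modules
layout) should be:

[ceph@ceph-client1 ~]$ uname -r
[ceph@ceph-client1 ~]$ find /lib/modules/$(uname -r) -name 'rbd.ko*'
# no output from find would mean the module is not built for this kernel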

If I check the health of the cluster, it looks fine:
[ceph@node1 ~]$ ceph -s
     cluster 98f22f5d-783b-43c2-8ae7-b97a715c9c86
      health HEALTH_OK
      monmap e1: 1 mons at {node1=172.17.35.17:6789/0}, election epoch 1, quorum 0 node1
      osdmap e5972: 3 osds: 3 up, 3 in
       pgmap v20011: 192 pgs, 3 pools, 142 bytes data, 2 objects
             190 MB used, 45856 MB / 46046 MB avail
                  192 active+clean
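
Since the monitor at 172.17.35.17:6789 clearly answers (ceph -s works) but
the client faults against the OSD at 172.17.35.22:6800, I also wonder whether
iptables on the OSD host is blocking the OSD port range (6800-7300 by
default). A check I could run there (sketch only; substitute the actual OSD
hostname for osd-host) would be:

[ceph@osd-host ~]$ sudo service iptables status   # CentOS 6 style
[ceph@osd-host ~]$ sudo iptables -nL | grep -iE 'reject|drop'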

Please let me know if I am doing anything wrong.

Regards,
Pratik Rupala

