Hi,

I am setting up a Ceph cluster for some experimentation. The cluster comes up successfully, but when I run "rbd map" on the client host, the kernel crashes (the system hangs) and I need to do a hard reset for it to recover.

My setup: all nodes run Ubuntu 12.04 with Linux kernel 3.5, and I am installing the Emperor version of Ceph. The cluster status is below:

root@CephMon:~# ceph osd tree
# id    weight  type name       up/down reweight
-1      2.2     root default
-2      1.66            host cephnode2
0       0.9                     osd.0   up      1
3       0.76                    osd.3   up      1
-3      0.54            host cephnode4
1       0.27                    osd.1   up      1
2       0.27                    osd.2   up      1
root@CephMon:~#

root@CephMon:~# ceph osd dump
epoch 65
fsid bef84776-a957-495e-be34-c353eb76c3d7
created 2014-05-27 08:53:59.112200
modified 2014-05-27 15:05:42.742630
flags
pool 0 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 38 owner 0 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 1 'metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 36 owner 0 flags hashpspool stripe_width 0
pool 2 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 96 pgp_num 96 last_change 61 owner 0 flags hashpspool stripe_width 0
max_osd 4
osd.0 up   in  weight 1 up_from 4 up_thru 61 down_at 0 last_clean_interval [0,0) 10.223.169.166:6800/26254 10.223.169.166:6801/26254 10.223.169.166:6802/26254 10.223.169.166:6803/26254 exists,up 1aefbfb2-a220-4f0e-9d91-1b9344717337
osd.1 up   in  weight 1 up_from 8 up_thru 61 down_at 0 last_clean_interval [0,0) 10.223.169.201:6800/32211 10.223.169.201:6801/32211 10.223.169.201:6802/32211 10.223.169.201:6803/32211 exists,up 077d4fe2-e8f7-42ba-a569-87efc7c11fbe
osd.2 up   in  weight 1 up_from 12 up_thru 61 down_at 0 last_clean_interval [0,0) 10.223.169.201:6805/33333 10.223.169.201:6806/33333 10.223.169.201:6807/33333 10.223.169.201:6808/33333 exists,up 734fb969-1bb6-46ad-91c3-60b4647c90ac
osd.3 up   in  weight 1 up_from 48 up_thru 61 down_at 0 last_clean_interval [0,0) 10.223.169.166:6805/27859 10.223.169.166:6806/27859 10.223.169.166:6807/27859 10.223.169.166:6808/27859 exists,up 5b479abb-168a-411d-ba4e-a37e63fdfbd4

These are the rbd commands I run on the client host; the final "rbd map" is what triggers the crash:

rbd create test --size 1024 --pool rbd
modprobe rbd
rbd map test --pool rbd

A screenshot is attached (Capture.JPG). Any pointers on this issue would be of great help.

Thanks in advance,
Sharmila
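P.S. Since the machine hangs hard, I have not yet been able to capture the oops text itself, only the screenshot. In case a full trace would help, below is a sketch of the netconsole setup I intend to use to stream kernel messages to a second machine before retrying "rbd map". The interface name eth0, the log-host address 10.223.169.50, its MAC, and the client address are placeholders for my network, not values from the output above:

# On the client that crashes, load netconsole before running "rbd map".
# Format: netconsole=src-port@src-ip/dev,dst-port@dst-ip/dst-mac
modprobe netconsole netconsole=6665@10.223.169.100/eth0,6666@10.223.169.50/aa:bb:cc:dd:ee:ff

# On the log host, capture the UDP stream to a file.
nc -u -l 6666 | tee rbd-map-oops.log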
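For completeness, these are the commands I am using to record the client kernel and cluster versions, and to dump the CRUSH map in case the tunables matter for a 3.5 kernel client (crushtool ships with the ceph packages):

uname -r                       # client kernel (3.5.x here)
ceph --version                 # cluster release (Emperor, 0.72.x)
ceph osd getcrushmap -o /tmp/crushmap
crushtool -d /tmp/crushmap -o /tmp/crushmap.txt   # decompile to plain text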
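And the basic sanity checks I run before the map attempt, in case they are useful to anyone trying to reproduce this:

rbd info test --pool rbd    # confirm the image exists; shows size and order
rbd showmapped              # list currently mapped rbd devices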