(krbd) rbd map failed on aarch64 with jewel

Hi all,

   I am testing ceph-10.2.5.tar.gz (downloaded from
download.ceph.com/tarballs) on aarch64.

  The ceph cluster is healthy:

    cluster 7bb94b46-1111-42e9-aaca-6ed9832b73a5
     health HEALTH_OK
     monmap e1: 1 mons at {openSUSE=10.10.109.23:6789/0}
            election epoch 4, quorum 0 openSUSE
     osdmap e29: 4 osds: 4 up, 4 in
            flags sortbitwise,require_jewel_osds
      pgmap v135: 64 pgs, 1 pools, 14512 kB data, 11 objects
            16609 MB used, 1845 GB / 1862 GB avail
                  64 active+clean

I created an image with "rbd create hello --size 10240".
I can put data into hello with rados, but "rbd map hello" fails:
 rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (110) Connection timed out
At that time the ceph cluster is still healthy.
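One way to narrow this down (assuming the standard Ceph messenger options, which exist in jewel) is to turn off CRC verification on the cluster side and retry the map. If the map then succeeds, the problem is the CRC computation itself (for example a platform-specific crc32c path on aarch64) rather than the network. This is triage only, not a fix:

```ini
# ceph.conf on the monitor/OSD hosts -- triage only, not a fix.
# "ms crc header" / "ms crc data" are standard messenger options
# (default true); disabling them makes the daemons skip CRC
# verification on incoming messages.
[global]
ms crc header = false
ms crc data = false
```

After restarting the mon and OSDs, retry "rbd map hello". The kernel client also accepts a "nocrc" map option for the data-CRC side, which can help isolate which direction miscomputes.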

kernel:
Linux openSUSE 4.4.2 #77 SMP Sat Dec 24 14:56:18 CST 2016 aarch64
aarch64 aarch64 GNU/Linux

dmesg:
[  117.694839] libceph: mon0 10.10.109.23:6789 socket closed (con state OPEN)
[  127.694839] libceph: mon0 10.10.109.23:6789 socket closed (con state OPEN)
[  137.694833] libceph: mon0 10.10.109.23:6789 socket closed (con state OPEN)
[  147.694874] libceph: mon0 10.10.109.23:6789 socket closed (con state OPEN)
[ 1674.235228] libceph: mon0 10.10.109.23:6789 socket closed (con state OPEN)

mon.log:
2016-12-24 15:31:09.302965 ffffb0628310  0
mon.openSUSE@0(leader).data_health(4) update_stats avail 39% total
100664 MB, used 55489 MB, avail 40039 MB
2016-12-24 15:31:13.770264 ffffad618310  0 -- 10.10.109.23:6789/0 >>
10.10.103.23:0/2754914976 pipe(0xaaaaf9904800 sd=24 :6789 s=2 pgs=863
cs=1 l=1 c=0xaaaaf965ce80).reader got bad header crc 0 != 1679376416
2016-12-24 15:31:23.770307 ffffad618310  0 -- 10.10.109.23:6789/0 >>
10.10.103.23:0/2754914976 pipe(0xaaaaf9904800 sd=24 :6789 s=2 pgs=864
cs=1 l=1 c=0xaaaaf97cad00).reader got bad header crc 0 != 1679376416
2016-12-24 15:31:33.770205 ffffad618310  0 -- 10.10.109.23:6789/0 >>
10.10.103.23:0/2754914976 pipe(0xaaaaf9903400 sd=24 :6789 s=2 pgs=865
cs=1 l=1 c=0xaaaaf97cab80).reader got bad header crc 0 != 1679376416
2016-12-24 15:31:43.770321 ffffad618310  0 -- 10.10.109.23:6789/0 >>
10.10.103.23:0/2754914976 pipe(0xaaaaf9903400 sd=24 :6789 s=2 pgs=866
cs=1 l=1 c=0xaaaaf97ca700).reader got bad header crc 0 != 1679376416
2016-12-24 15:31:53.770259 ffffad618310  0 -- 10.10.109.23:6789/0 >>
10.10.103.23:0/2754914976 pipe(0xaaaaf9904800 sd=24 :6789 s=2 pgs=867
cs=1 l=1 c=0xaaaaf97c8d80).reader got bad header crc 0 != 1679376416
2016-12-24 15:32:03.770263 ffffad618310  0 -- 10.10.109.23:6789/0 >>
10.10.103.23:0/2754914976 pipe(0xaaaaf9904800 sd=24 :6789 s=2 pgs=868
cs=1 l=1 c=0xaaaaf97cb480).reader got bad header crc 0 != 1679376416
2016-12-24 15:32:09.303148 ffffb0628310  0
mon.openSUSE@0(leader).data_health(4) update_stats avail 39% total
100664 MB, used 55489 MB, avail 40039 MB
2016-12-24 15:32:13.770223 ffffad618310  0 -- 10.10.109.23:6789/0 >>
10.10.103.23:0/2754914976 pipe(0xaaaaf9903400 sd=24 :6789 s=2 pgs=869
cs=1 l=1 c=0xaaaaf97ca280).reader got bad header crc 0 != 1679376416
2016-12-24 15:32:23.770156 ffffad618310  0 -- 10.10.109.23:6789/0 >>
10.10.103.23:0/2754914976 pipe(0xaaaaf9903400 sd=24 :6789 s=2 pgs=870
cs=1 l=1 c=0xaaaaf97c9f80).reader got bad header crc 0 != 1679376416
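For reference, "got bad header crc 0 != 1679376416" means the CRC carried in the received message header (0) does not match the CRC the mon computed over that header (1679376416), so the mon drops the connection, which matches the repeating "socket closed" lines in dmesg. Ceph's messenger uses the CRC-32C (Castagnoli) checksum family; the sketch below is a slow, standard CRC-32C in Python for illustration only (Ceph's real implementation is optimized C/assembly, including an aarch64 CRC-instruction path in jewel, and its seeding details may differ):

```python
# Minimal, bitwise CRC-32C (Castagnoli) sketch -- the checksum family
# used by Ceph's messenger for header/data CRCs. Illustration only:
# a platform-specific bug in an optimized crc32c path is one way two
# peers can disagree, producing mismatches like "crc 0 != 1679376416".

def crc32c(data: bytes, crc: int = 0) -> int:
    """Reflected CRC-32C, polynomial 0x1EDC6F41 (reversed 0x82F63B78)."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value for CRC-32C:
print(hex(crc32c(b"123456789")))  # -> 0xe3069283
```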

When I test v0.94 installed with ceph-deploy, rbd map works fine.
Why does jewel fail, and how can I solve it?
Thanks.
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



