Have you updated your "/etc/multipath.conf" as documented here [1]? You
should have ALUA configured, but that doesn't appear to be the case in
the output you provided. (A rough sketch of the device section in
question is appended at the end of this message.)

On Wed, Oct 16, 2019 at 11:36 PM 展荣臻(信泰) <zhanrzh_xt@xxxxxxxxxxxxxx> wrote:
>
> > -----Original Message-----
> > From: "Jason Dillaman" <jdillama@xxxxxxxxxx>
> > Sent: 2019-10-17 09:54:30 (Thursday)
> > To: "展荣臻(信泰)" <zhanrzh_xt@xxxxxxxxxxxxxx>
> > Cc: dillaman <dillaman@xxxxxxxxxx>, ceph-users <ceph-users@xxxxxxxxxxxxxx>
> > Subject: Re: ceph iscsi question
> >
> > On Wed, Oct 16, 2019 at 9:52 PM 展荣臻(信泰) <zhanrzh_xt@xxxxxxxxxxxxxx> wrote:
> > >
> > > > -----Original Message-----
> > > > From: "Jason Dillaman" <jdillama@xxxxxxxxxx>
> > > > Sent: 2019-10-16 20:33:47 (Wednesday)
> > > > To: "展荣臻(信泰)" <zhanrzh_xt@xxxxxxxxxxxxxx>
> > > > Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> > > > Subject: Re: ceph iscsi question
> > > >
> > > > On Wed, Oct 16, 2019 at 2:35 AM 展荣臻(信泰) <zhanrzh_xt@xxxxxxxxxxxxxx> wrote:
> > > > >
> > > > > Hi all,
> > > > > We deploy Ceph with ceph-ansible; the OSDs, MONs, and iSCSI daemons run in Docker.
> > > > > I created an iSCSI target according to https://docs.ceph.com/docs/luminous/rbd/iscsi-target-cli/.
> > > > > I discovered and logged in to the iSCSI target from another host, as shown below:
> > > > >
> > > > > [root@node1 tmp]# iscsiadm -m discovery -t sendtargets -p 192.168.42.110
> > > > > 192.168.42.110:3260,1 iqn.2003-01.com.teamsun.iscsi-gw:iscsi-igw
> > > > > 192.168.42.111:3260,2 iqn.2003-01.com.teamsun.iscsi-gw:iscsi-igw
> > > > > [root@node1 tmp]# iscsiadm -m node -T iqn.2003-01.com.teamsun.iscsi-gw:iscsi-igw -p 192.168.42.110 -l
> > > > > Logging in to [iface: default, target: iqn.2003-01.com.teamsun.iscsi-gw:iscsi-igw, portal: 192.168.42.110,3260] (multiple)
> > > > > Login to [iface: default, target: iqn.2003-01.com.teamsun.iscsi-gw:iscsi-igw, portal: 192.168.42.110,3260] successful.
> > > > >
> > > > > /dev/sde is mapped. When I run mkfs.xfs -f /dev/sde, an error occurs:
> > > > >
> > > > > [root@node1 tmp]# mkfs.xfs -f /dev/sde
> > > > > meta-data=/dev/sde               isize=512    agcount=4, agsize=1966080 blks
> > > > >          =                       sectsz=512   attr=2, projid32bit=1
> > > > >          =                       crc=1        finobt=0, sparse=0
> > > > > data     =                       bsize=4096   blocks=7864320, imaxpct=25
> > > > >          =                       sunit=0      swidth=0 blks
> > > > > naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
> > > > > log      =internal log           bsize=4096   blocks=3840, version=2
> > > > >          =                       sectsz=512   sunit=0 blks, lazy-count=1
> > > > > realtime =none                   extsz=4096   blocks=0, rtextents=0
> > > > > existing superblock read failed: Input/output error
> > > > > mkfs.xfs: pwrite64 failed: Input/output error
> > > > >
> > > > > Messages in /var/log/messages:
> > > > > Oct 16 14:01:44 localhost kernel: Dev sde: unable to read RDB block 0
> > > > > Oct 16 14:01:44 localhost kernel: sde: unable to read partition table
> > > > > Oct 16 14:02:17 localhost kernel: Dev sde: unable to read RDB block 0
> > > > > Oct 16 14:02:17 localhost kernel: sde: unable to read partition table
> > > > >
> > > > > We use Luminous Ceph.
> > > > > What causes this error? How can I debug it? Any suggestion is appreciated.
> > > >
> > > > Please use the associated multipath device, not the raw block device.
> > >
> > > Hi Jason,
> > > Thanks for your reply.
> > > The multipath device gives the same error as the raw block device.
> >
> > What does "multipath -ll" show?
> >
> [root@node1 ~]# multipath -ll
> mpathf (36001405366100aeda2044f286329b57a) dm-2 LIO-ORG ,TCMU device
> size=30G features='0' hwhandler='0' wp=rw
> |-+- policy='service-time 0' prio=0 status=enabled
> | `- 13:0:0:0 sde 8:64 failed faulty running
> `-+- policy='service-time 0' prio=0 status=enabled
>   `- 14:0:0:0 sdf 8:80 failed faulty running
> [root@node1 ~]#
>
> I don't know if it is related to the fact that all our daemons run in Docker, while Docker itself runs on KVM.

[1] https://docs.ceph.com/ceph-prs/30912/rbd/iscsi-initiator-linux/

--
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
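
For reference, the configuration that [1] describes for ceph-iscsi LUNs is
a "LIO-ORG" device section in /etc/multipath.conf that enables the ALUA
hardware handler and prioritizer. The sketch below is written from memory
of that page; the exact values in the linked documentation for your
release take precedence over anything here:

# /etc/multipath.conf on the initiator node (sketch; verify against [1])
devices {
        device {
                vendor                 "LIO-ORG"
                hardware_handler       "1 alua"
                path_grouping_policy   "failover"
                path_selector          "queue-length 0"
                failback               60
                path_checker           tur
                prio                   alua
                prio_args              exclusive_pref_bit
                fast_io_fail_tmo       25
                no_path_retry          queue
        }
}

With a section like this in effect (and healthy gateways), "multipath -ll"
should report hwhandler='1 alua' and non-zero path priorities for the map,
with one path group active and the other on standby, rather than the two
"failed faulty" paths and hwhandler='0' shown in the output above.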
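
After editing the file, multipathd has to re-read its configuration before
the map is rebuilt. A minimal sketch, assuming a systemd-based initiator
host and the map name "mpathf" taken from the output above:

# pick up the new /etc/multipath.conf and re-check the map
systemctl restart multipathd
multipath -r mpathf        # force a reload of this specific map
multipath -ll mpathf       # hwhandler should now show '1 alua'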