Re: ceph iscsi question

> -----Original Message-----
> From: "Jason Dillaman" <jdillama@xxxxxxxxxx>
> Sent: 2019-10-16 20:33:47 (Wednesday)
> To: "展荣臻(信泰)" <zhanrzh_xt@xxxxxxxxxxxxxx>
> Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re: ceph iscsi question
> 
> On Wed, Oct 16, 2019 at 2:35 AM 展荣臻(信泰) <zhanrzh_xt@xxxxxxxxxxxxxx> wrote:
> >
> > hi, all
> >   We deployed Ceph with ceph-ansible; the OSDs, MONs, and iSCSI daemons run in Docker.
> >   I created an iSCSI target following https://docs.ceph.com/docs/luminous/rbd/iscsi-target-cli/.
> >   I discovered and logged in to the iSCSI target from another host, as shown below:
> >
> > [root@node1 tmp]# iscsiadm -m discovery -t sendtargets -p 192.168.42.110
> > 192.168.42.110:3260,1 iqn.2003-01.com.teamsun.iscsi-gw:iscsi-igw
> > 192.168.42.111:3260,2 iqn.2003-01.com.teamsun.iscsi-gw:iscsi-igw
> > [root@node1 tmp]# iscsiadm -m node -T iqn.2003-01.com.teamsun.iscsi-gw:iscsi-igw -p 192.168.42.110 -l
> > Logging in to [iface: default, target: iqn.2003-01.com.teamsun.iscsi-gw:iscsi-igw, portal: 192.168.42.110,3260] (multiple)
> > Login to [iface: default, target: iqn.2003-01.com.teamsun.iscsi-gw:iscsi-igw, portal: 192.168.42.110,3260] successful.
> >
> >  /dev/sde is mapped. When I run mkfs.xfs -f /dev/sde, an error occurs:
> >
> > [root@node1 tmp]# mkfs.xfs -f /dev/sde
> > meta-data=/dev/sde               isize=512    agcount=4, agsize=1966080 blks
> >          =                       sectsz=512   attr=2, projid32bit=1
> >          =                       crc=1        finobt=0, sparse=0
> > data     =                       bsize=4096   blocks=7864320, imaxpct=25
> >          =                       sunit=0      swidth=0 blks
> > naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
> > log      =internal log           bsize=4096   blocks=3840, version=2
> >          =                       sectsz=512   sunit=0 blks, lazy-count=1
> > realtime =none                   extsz=4096   blocks=0, rtextents=0
> > existing superblock read failed: Input/output error
> > mkfs.xfs: pwrite64 failed: Input/output error
> >
> > Messages in /var/log/messages:
> > Oct 16 14:01:44 localhost kernel: Dev sde: unable to read RDB block 0
> > Oct 16 14:01:44 localhost kernel: sde: unable to read partition table
> > Oct 16 14:02:17 localhost kernel: Dev sde: unable to read RDB block 0
> > Oct 16 14:02:17 localhost kernel: sde: unable to read partition table
> >
> > We use Luminous Ceph.
> > What causes this error, and how can I debug it? Any suggestions are appreciated.
> 
> Please use the associated multipath device, not the raw block device.
> 
hi, Jason
  Thanks for your reply.
  The multipath device gives the same error as the raw block device.
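For reference, a sketch of double-checking which multipath map actually wraps the iSCSI paths before running mkfs against it (assumes multipath-tools is installed and both portals are logged in; the `mpatha` name and the sample `multipath -ll` output here are illustrative, not taken from this cluster):

```shell
# On the initiator, the map name would normally come from `multipath -ll`,
# and mkfs would then target /dev/mapper/<map> instead of the raw /dev/sde
# path, e.g.:
#   multipath -ll
#   mkfs.xfs -f /dev/mapper/mpatha
#
# Helper that maps a member disk (e.g. sde) back to its multipath map name
# from `multipath -ll`-style output; the sample text below is illustrative.
map_for_disk() {
    # $1 = member disk name; reads `multipath -ll` output on stdin
    awk -v disk="$1" '
        /^[a-z].*dm-[0-9]/ { map = $1 }   # header line: "<map> (<wwid>) dm-N ..."
        $0 ~ (" " disk " ") { print map }'
}

result=$(printf '%s\n' \
  'mpatha (3600140500000000000000000000000000) dm-3 LIO-ORG ,TCMU device' \
  'size=30G features=0 hwhandler=1 alua wp=rw' \
  '|-+- policy=queue-length 0 prio=50 status=active' \
  '| `- 3:0:0:0 sde 8:64 active ready running' \
  '`-+- policy=queue-length 0 prio=10 status=enabled' \
  '  `- 4:0:0:0 sdf 8:80 active ready running' |
  map_for_disk sde)
echo "$result"   # → mpatha
```

If the write still fails through /dev/mapper/<map>, the gateway-side tcmu-runner logs are the next place to look, since the initiator-side I/O error is only the symptom.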




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



