Re: iSCSI: tcmu-runner can't open images?

Hello Matthias,

We encountered a similar issue; it turned out to be because we used a pool other than rbd with gwcli.

We got it fixed; the fix should be in a pull request upstream.

/Heðin

----- Original Message -----
From: "Matthias Leopold" <matthias.leopold@xxxxxxxxxxxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Sent: Thursday, 2 November, 2017 15:34:45
Subject:  iSCSI: tcmu-runner can't open images?

Hi,

I'm trying to set up iSCSI gateways for a Ceph Luminous cluster using 
these instructions:
http://docs.ceph.com/docs/master/rbd/iscsi-target-cli/

When arriving at the step "Configuring: Adding a RADOS Block Device 
(RBD)", things start to get messy: there is no "disks" entry in my 
target path, so I can't "cd 
/iscsi-target/iqn.2003-01.com.redhat.iscsi-gw:<target_name>/disks/". 
When I try to create a disk in the top-level "/disks" path ("/disks> 
create pool=ovirt-default image=itest04 size=50g"), gwcli crashes with 
"ValueError: No JSON object could be decoded" (there is more output with 
debug enabled, but I don't think it matters). More interesting is 
/var/log/tcmu-runner.log, which consistently says

[DEBUG] handle_netlink:207: cmd 1. Got header version 2. Supported 2.
[DEBUG] dev_added:768 rbd/ovirt-default.itest04: Got block_size 512, size in bytes 53687091200
[DEBUG] tcmu_rbd_open:581 rbd/ovirt-default.itest04: tcmu_rbd_open config rbd/ovirt-default/itest04/osd_op_timeout=30 block size 512 num lbas 104857600.
[DEBUG] timer_check_and_set_def:234 rbd/ovirt-default.itest04: The cluster's default osd op timeout(30.000000), osd heartbeat grace(20) interval(6)
[DEBUG] timer_check_and_set_def:242 rbd/ovirt-default.itest04: The osd op timeout will remain the default value: 30.000000
[ERROR] tcmu_rbd_image_open:318 rbd/ovirt-default.itest04: Could not open image itest04/osd_op_timeout=30. (Err -2)
[ERROR] add_device:496: handler open failed for uio0

at the moment of the crash. The funny thing is, the image does get 
created in the Ceph pool 'ovirt-default'; only gwcli/tcmu-runner can't 
read it. The "/disks" path in gwcli and the "/backstores/user:rbd" path 
in targetcli are always empty.
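Looking at the log, the image tcmu-runner tries to open is literally 
"itest04/osd_op_timeout=30", so the config string 
rbd/ovirt-default/itest04/osd_op_timeout=30 seems to be split at the 
wrong '/'. As a sketch of what I mean (my own illustration, not 
tcmu-runner's actual code; the function name is made up), a correct 
split would look like:

```python
def parse_rbd_config(cfg):
    """Split a config string like 'rbd/ovirt-default/itest04/osd_op_timeout=30'
    into (pool, image, options). Illustration only, not tcmu-runner code."""
    prefix, pool, rest = cfg.split("/", 2)
    assert prefix == "rbd"
    # Any key=value options follow the image name after a further '/'.
    image, _, opts = rest.partition("/")
    options = dict(kv.split("=", 1) for kv in opts.split("/") if kv)
    return pool, image, options

print(parse_rbd_config("rbd/ovirt-default/itest04/osd_op_timeout=30"))
# → ('ovirt-default', 'itest04', {'osd_op_timeout': '30'})
```

The error message suggests the option suffix is being left attached to 
the image name when the pool is not the default 'rbd'.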

I haven't gotten past this. Can anybody tell me what's wrong?

I tried two different tcmu-runner binaries: one self-compiled from the 
sources at https://github.com/open-iscsi/tcmu-runner/tree/v1.3.0-rc4, 
the other RPM binaries from https://shaman.ceph.com/repos/tcmu-runner/ 
(ID: 58311). The error is the same with both versions.

My setup:
- CentOS 7.4
- kernel 3.10.0-693.2.2.el7.x86_64
- iSCSI gateway co-located on a Ceph OSD node
- ceph programs from http://download.ceph.com/rpm-luminous
- python-rtslib-2.1.fb64 installed with "pip install"
- ceph-iscsi-config-2.3 installed as rpm compiled from 
https://github.com/ceph/ceph-iscsi-config/tree/2.3
- ceph-iscsi-cli-2.5 installed as rpm from 
https://github.com/ceph/ceph-iscsi-cli/tree/2.5

thanks a lot for any help
matthias
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



