Excellent news!
Many thanks for all your efforts.
If you do not mind, please confirm the following steps (for CentOS 7, kernel version 3.10.0-693.11.6.el7.x86_64):
- download and install the RPMs from the x86_64 repositories you provided
- do a git clone and, if a new version is available, "pip install . --upgrade" for:
  ceph-iscsi-cli
  ceph-iscsi-config
  rtslib-fb
  targetcli-fb
- reboot
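
A rough sketch of those steps as shell commands (the Shaman repo-file
endpoints and the upstream git URLs below are my assumptions, not
something you confirmed):

    # 1) pull the test kernel and tcmu-runner from the Shaman repos linked below
    curl -o /etc/yum.repos.d/ceph-iscsi-test.repo \
        https://shaman.ceph.com/api/repos/kernel/ceph-iscsi-test/latest/centos/7/repo
    curl -o /etc/yum.repos.d/tcmu-runner.repo \
        https://shaman.ceph.com/api/repos/tcmu-runner/master/latest/centos/7/repo
    yum install kernel tcmu-runner libtcmu

    # 2) clone and upgrade the four tools (upstream locations assumed)
    for repo in ceph/ceph-iscsi-cli ceph/ceph-iscsi-config \
                open-iscsi/rtslib-fb open-iscsi/targetcli-fb; do
        git clone "https://github.com/${repo}.git"
        (cd "$(basename "$repo")" && pip install . --upgrade)
    done

    # 3) boot into the new kernel
    reboot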
Steven
On 22 January 2018 at 08:53, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
The v4.13-based kernel with the necessary bug fixes and TCMU changes
is available here [1] and tcmu-runner v1.3.0 is available here [2].
[1] https://shaman.ceph.com/repos/kernel/ceph-iscsi-test/
[2] https://shaman.ceph.com/repos/tcmu-runner/master/
--
On Sat, Jan 20, 2018 at 7:33 AM, Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
>
>
> Sorry for asking what may be obvious, but is this the kernel available
> in elrepo? Or a different one?
>
> -----Original Message-----
> From: Mike Christie [mailto:mchristi@xxxxxxxxxx]
> Sent: Saturday, 20 January 2018 1:19
> To: Steven Vacaroaia; Joshua Chen
> Cc: ceph-users
> Subject: Re: iSCSI over RBD
>
> On 01/19/2018 02:12 PM, Steven Vacaroaia wrote:
>> Hi Joshua,
>>
>> I was under the impression that kernel 3.10.0-693 will work with iscsi
>>
>
> That kernel works with RHCS 2.5 and below. You need the RPMs from that
> or the matching upstream releases. Besides having to dig out the
> versions and match things up, the problem with those releases is that
> they were tech preview or only support Linux initiators.
>
> It looks like you are using the newer upstream tools or RHCS 3.0 tools.
> For them you need the RHEL 7.5 beta or newer kernel or an upstream one.
> For upstream all the patches got merged into the target layer
> maintainer's tree yesterday. A new tcmu-runner release has been made.
> And I just pushed a test kernel with all the patches, based on 4.13 (4.14
> had a bug in the login code which is still being fixed), to github, so
> people do not have to wait for the next-next kernel release to come out.
>
> Just give us a couple of days for the kernel build to be done, to make the
> needed ceph-iscsi-* release (the current version will fail to create rbd
> images with the current tcmu-runner release), and to get the documentation
> updated, because some links are incorrect and some version info needs to
> be updated.
>
>
>> Unfortunately I still cannot create a disk because qfull_time_out is
>> not supported.
>>
>> What am I missing / doing wrong?
>>
>> 2018-01-19 15:06:45,216 INFO [lun.py:601:add_dev_to_lio()] -
>> (LUN.add_dev_to_lio) Adding image 'rbd.disk2' to LIO
>> 2018-01-19 15:06:45,295 ERROR [lun.py:634:add_dev_to_lio()] - Could
>> not set LIO device attribute cmd_time_out/qfull_time_out for device:
>> rbd.disk2. Kernel not supported. - error(Cannot find attribute:
>> qfull_time_out)
>> 2018-01-19 15:06:45,300 ERROR [rbd-target-api:731:_disk()] - LUN
>> alloc problem - Could not set LIO device attribute
>> cmd_time_out/qfull_time_out for device: rbd.disk2. Kernel not
>> supported. - error(Cannot find attribute: qfull_time_out)
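>>
>> (One way I try to sanity-check the kernel, assuming the usual LIO
>> configfs layout - the user_0/rbd.disk2 path below is illustrative -
>> is to list a TCMU device's attributes once one exists; on a kernel
>> with the TCMU changes the listing should include qfull_time_out:
>>
>>     ls /sys/kernel/config/target/core/user_0/rbd.disk2/attrib/
>> )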
>>
>>
>> Many thanks
>>
>> Steven
>>
>> On 4 January 2018 at 22:40, Joshua Chen <cschen@xxxxxxxxxxxxxxxxxxx> wrote:
>>
>> Hello Steven,
>> I am using CentOS 7.4.1708 with kernel 3.10.0-693.el7.x86_64
>> and the following packages:
>>
>> ceph-iscsi-cli-2.5-9.el7.centos.noarch.rpm
>> ceph-iscsi-config-2.3-12.el7.centos.noarch.rpm
>> libtcmu-1.3.0-0.4.el7.centos.x86_64.rpm
>> libtcmu-devel-1.3.0-0.4.el7.centos.x86_64.rpm
>> python-rtslib-2.1.fb64-2.el7.centos.noarch.rpm
>> python-rtslib-doc-2.1.fb64-2.el7.centos.noarch.rpm
>> targetcli-2.1.fb47-0.1.20170815.git5bf3517.el7.centos.noarch.rpm
>> tcmu-runner-1.3.0-0.4.el7.centos.x86_64.rpm
>> tcmu-runner-debuginfo-1.3.0-0.4.el7.centos.x86_64.rpm
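>>
>> (Assuming those RPMs are all sitting in the current directory, I
>> install the whole set in one dependency-resolving step:
>>
>>     yum localinstall ./*.rpm
>> )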
>>
>>
>> Cheers
>> Joshua
>>
>>
>> On Fri, Jan 5, 2018 at 2:14 AM, Steven Vacaroaia <stef97@xxxxxxxxx> wrote:
>>
>> Hi Joshua,
>>
>> How did you manage to use the iSCSI gateway?
>> I would like to do that but am still waiting for a patched kernel.
>>
>> What kernel/OS did you use and/or how did you patch it?
>>
>> Thanks
>> Steven
>>
>> On 4 January 2018 at 04:50, Joshua Chen <cschen@xxxxxxxxxxxxxxxxxxx> wrote:
>>
>> Dear all,
>> Although I managed to run gwcli and created some iqns and luns,
>> I do need a working config example so that my initiator could
>> connect and get the lun.
>>
>> I am familiar with targetcli, and I used to do the following
>> ACL-style connection rather than password auth; the targetcli
>> setting tree is here:
>>
>> (or see this page:
>> http://www.asiaa.sinica.edu.tw/~cschen/targetcli.html)
>>
>> #targetcli ls
>> o- / ........................................................... [...]
>>   o- backstores ................................................ [...]
>>   | o- block .................................... [Storage Objects: 1]
>>   | | o- vmware_5t  [/dev/rbd/rbd/vmware_5t (5.0TiB) write-thru activated]
>>   | |   o- alua ..................................... [ALUA Groups: 1]
>>   | |     o- default_tg_pt_gp ........... [ALUA state: Active/optimized]
>>   | o- fileio ................................... [Storage Objects: 0]
>>   | o- pscsi .................................... [Storage Objects: 0]
>>   | o- ramdisk .................................. [Storage Objects: 0]
>>   | o- user:rbd ................................. [Storage Objects: 0]
>>   o- iscsi ............................................. [Targets: 1]
>>   | o- iqn.2017-12.asiaa.cephosd1:vmware5t .................. [TPGs: 1]
>>   |   o- tpg1 ..................................... [gen-acls, no-auth]
>>   |     o- acls ........................................... [ACLs: 12]
>>   |     | o- iqn.1994-05.com.redhat:15dbed23be9e ..... [Mapped LUNs: 1]
>>   |     | | o- mapped_lun0 ............... [lun0 block/vmware_5t (rw)]
>>   |     | o- iqn.1994-05.com.redhat:15dbed23be9e-ovirt1  [Mapped LUNs: 1]
>>   |     | | o- mapped_lun0 ............... [lun0 block/vmware_5t (rw)]
>>   |     | o- iqn.1994-05.com.redhat:2af344ba6ae5-ceph-admin-test  [Mapped LUNs: 1]
>>   |     | | o- mapped_lun0 ............... [lun0 block/vmware_5t (rw)]
>>   |     | o- iqn.1994-05.com.redhat:67669afedddf ..... [Mapped LUNs: 1]
>>   |     | | o- mapped_lun0 ............... [lun0 block/vmware_5t (rw)]
>>   |     | o- iqn.1994-05.com.redhat:67669afedddf-ovirt3  [Mapped LUNs: 1]
>>   |     | | o- mapped_lun0 ............... [lun0 block/vmware_5t (rw)]
>>   |     | o- iqn.1994-05.com.redhat:a7c1ec3c43f7 ..... [Mapped LUNs: 1]
>>   |     | | o- mapped_lun0 ............... [lun0 block/vmware_5t (rw)]
>>   |     | o- iqn.1994-05.com.redhat:a7c1ec3c43f7-ovirt2  [Mapped LUNs: 1]
>>   |     | | o- mapped_lun0 ............... [lun0 block/vmware_5t (rw)]
>>   |     | o- iqn.1994-05.com.redhat:b01662ec2129-ceph-node2  [Mapped LUNs: 1]
>>   |     | | o- mapped_lun0 ............... [lun0 block/vmware_5t (rw)]
>>   |     | o- iqn.1994-05.com.redhat:d46b42a1915b-ceph-node3  [Mapped LUNs: 1]
>>   |     | | o- mapped_lun0 ............... [lun0 block/vmware_5t (rw)]
>>   |     | o- iqn.1994-05.com.redhat:e7692a10f661-ceph-node1  [Mapped LUNs: 1]
>>   |     | | o- mapped_lun0 ............... [lun0 block/vmware_5t (rw)]
>>   |     | o- iqn.1998-01.com.vmware:localhost-0f904dfd  [Mapped LUNs: 1]
>>   |     | | o- mapped_lun0 ............... [lun0 block/vmware_5t (rw)]
>>   |     | o- iqn.1998-01.com.vmware:localhost-6af62e4c  [Mapped LUNs: 1]
>>   |     |   o- mapped_lun0 ............... [lun0 block/vmware_5t (rw)]
>>   |     o- luns ............................................ [LUNs: 1]
>>   |     | o- lun0  [block/vmware_5t (/dev/rbd/rbd/vmware_5t) (default_tg_pt_gp)]
>>   |     o- portals ...................................... [Portals: 1]
>>   |       o- 172.20.0.12:3260 .................................. [OK]
>>   o- loopback .......................................... [Targets: 0]
>>   o- xen_pvscsi ........................................ [Targets: 0]
>>
>> My targetcli setup procedure is as follows; could someone
>> translate it to the equivalent gwcli procedure?
>> Sorry for asking this; it is due to the lack of documentation
>> and examples.
>> Thanks in advance
>>
>> Cheers
>> Joshua
>>
>>
>>
>>
>> targetcli /backstores/block create name=vmware_5t dev=/dev/rbd/rbd/vmware_5t
>> targetcli /iscsi/ create iqn.2017-12.asiaa.cephosd1:vmware5t
>> targetcli /iscsi/iqn.2017-12.asiaa.cephosd1:vmware5t/tpg1/portals delete ip_address=0.0.0.0 ip_port=3260
>>
>> targetcli
>> cd /iscsi/iqn.2017-12.asiaa.cephosd1:vmware5t/tpg1
>> portals/ create 172.20.0.12
>> acls/
>> create iqn.1994-05.com.redhat:e7692a10f661-ceph-node1
>> create iqn.1994-05.com.redhat:b01662ec2129-ceph-node2
>> create iqn.1994-05.com.redhat:d46b42a1915b-ceph-node3
>> create iqn.1994-05.com.redhat:15dbed23be9e
>> create iqn.1994-05.com.redhat:a7c1ec3c43f7
>> create iqn.1994-05.com.redhat:67669afedddf
>> create iqn.1994-05.com.redhat:15dbed23be9e-ovirt1
>> create iqn.1994-05.com.redhat:a7c1ec3c43f7-ovirt2
>> create iqn.1994-05.com.redhat:67669afedddf-ovirt3
>> create iqn.1994-05.com.redhat:2af344ba6ae5-ceph-admin-test
>> create iqn.1998-01.com.vmware:localhost-6af62e4c
>> create iqn.1998-01.com.vmware:localhost-0f904dfd
>> cd ..
>> set attribute generate_node_acls=1
>> cd luns
>> create /backstores/block/vmware_5t
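>>
>> My untested guess at the gwcli equivalent, pieced together from the
>> upstream docs, is below; the gateway name/IP match my setup, the size
>> comes from the existing image, and I am not sure gwcli supports the
>> no-auth generate_node_acls style at all (it may insist on CHAP and on
>> more than one gateway):
>>
>>     gwcli
>>     /> cd /iscsi-target
>>     /iscsi-target> create iqn.2017-12.asiaa.cephosd1:vmware5t
>>     /iscsi-target> cd iqn.2017-12.asiaa.cephosd1:vmware5t/gateways
>>     /gateways> create cephosd1 172.20.0.12 skipchecks=true
>>     /gateways> cd /disks
>>     /disks> create pool=rbd image=vmware_5t size=5T
>>     /disks> cd /iscsi-target/iqn.2017-12.asiaa.cephosd1:vmware5t/hosts
>>     /hosts> create iqn.1994-05.com.redhat:e7692a10f661-ceph-node1
>>     /hosts> disk add rbd.vmware_5t
>>     (...and repeat the create / disk add pair for each initiator iqn)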
>>
>>
>>
>>
>> On Thu, Jan 4, 2018 at 10:55 AM, Joshua Chen <cschen@xxxxxxxxxxxxxxxxxxx> wrote:
>>
>> I had the same problem before (mine is CentOS), and when I created
>> /iscsi/create iqn_bla-bla
>> it said
>> Local LIO instance already has LIO configured with a
>> target - unable to continue
>>
>>
>>
>> then finally the solution turned out to be to turn off the
>> target service:
>>
>> systemctl stop target
>> systemctl disable target
>>
>> They are both managing the same LIO config, so you need to
>> disable the 'target' service (targetcli) in order to allow
>> gwcli (rbd-target-api) to do its job.
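>>
>> (To verify the handover afterwards - on my box stopping the target
>> service also clears the running LIO config, so:
>>
>>     targetcli ls                      # should now show an empty tree
>>     systemctl restart rbd-target-api
>>     gwcli ls                          # should start without the LIO error
>> )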
>>
>> Cheers
>> Joshua
>>
>> On Thu, Jan 4, 2018 at 2:39 AM, Mike Christie <mchristi@xxxxxxxxxx> wrote:
>>
>> On 12/25/2017 03:13 PM, Joshua Chen wrote:
>> > Hello folks,
>> > I am trying to share my ceph rbd images through the iscsi protocol.
>> >
>> > I am trying the iscsi-gateway:
>> > http://docs.ceph.com/docs/master/rbd/iscsi-overview/
>> >
>> >
>> > now
>> >
>> > systemctl start rbd-target-api
>> > is working and I could run gwcli
>> > (at a CentOS 7.4 osd node)
>> >
>> > gwcli
>> > /> ls
>> > o- / ....................................................... [...]
>> >   o- clusters ..................................... [Clusters: 1]
>> >   | o- ceph ........................................ [HEALTH_OK]
>> >   |   o- pools ....................................... [Pools: 1]
>> >   |   | o- rbd ........ [(x3), Commit: 0b/25.9T (0%), Used: 395M]
>> >   |   o- topology ............................ [OSDs: 9,MONs: 3]
>> >   o- disks ...................................... [0b, Disks: 0]
>> >   o- iscsi-target ................................. [Targets: 0]
>> >
>> >
>> > but when I created an iscsi-target, I got
>> >
>> > Local LIO instance already has LIO configured with a target - unable
>> > to continue
>> >
>> > /> /iscsi-target create iqn.2003-01.org.linux-iscsi.ceph-node1.x8664:sn.571e1ab51af2
>> > Local LIO instance already has LIO configured with a target - unable
>> > to continue
>> > />
>> >
>>
>>
>> Could you send the output of
>>
>> targetcli ls
>>
>> ?
>>
>> What distro are you using?
>>
>> You might just have a target set up from a non-gwcli source. Maybe from
>> the distro targetcli systemd tools.
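>>
>> If so, the distro target service restores its saved config at boot.
>> A rough way to check and clear it (the saveconfig.json path is the
>> targetcli-fb default; adjust if your distro differs):
>>
>>     cat /etc/target/saveconfig.json   # a stale target will show up here
>>     systemctl stop target             # its ExecStop normally clears the running config
>>     systemctl disable target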
>>
>>
>>
>>
>
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com