Clicking on the link provided, I get this:
../
SRPMS/     22-Jan-2018 13:51    -
aarch64/   22-Jan-2018 13:51    -
noarch/    22-Jan-2018 13:51    -
x86_64/    22-Jan-2018 13:51    -
Inside the x86_64/repodata from above, there is this:
../
2f241a8387cf35372fd709be4ef6ec83b8a00cc744bb90f..>  22-Jan-2018 13:51   573
401dc19bda88c82c403423fb835844d64345f7e95f5b983..>  22-Jan-2018 13:51   123
6bf9672d0862e8ef8b8ff05a2fd0208a922b1f5978e6589..>  22-Jan-2018 13:51   123
99a710d0cadf6e62be6c49c455ac355f1c65c1740da1bd8..>  22-Jan-2018 13:51   593
dabe2ce5481d23de1f4f52bdcfee0f9af98316c9e0de2ce..>  22-Jan-2018 13:51   134
fd013707e27f8251b52dc4b6aea4c07c81c0b06cff0c0c2..>  22-Jan-2018 13:51  1156
repomd.xml                                          22-Jan-2018 13:51  2962
How would I use them?
Steven
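[For context: the hashed files under repodata/ are yum metadata (primary/filelists/other databases) indexed by repomd.xml; yum fetches them itself once a repo's baseurl points at the directory above repodata/, so they are never downloaded by hand. A minimal sketch — the repomd.xml contents below are fabricated for the demo — of how the index maps to those files:]

```shell
# Toy repomd.xml mimicking the one served under repodata/ (contents are
# invented for this illustration); list the metadata files it indexes,
# the same way yum does when it refreshes a repo.
cat > /tmp/repomd.xml <<'EOF'
<repomd xmlns="http://linux.duke.edu/metadata/repo">
  <data type="primary_db">
    <location href="repodata/2f241a83-primary.sqlite.bz2"/>
  </data>
  <data type="other_db">
    <location href="repodata/99a710d0-other.sqlite.bz2"/>
  </data>
</repomd>
EOF
grep -o 'href="[^"]*"' /tmp/repomd.xml
```

The actual RPMs live one level up (under x86_64/); pointing a yum baseurl there is all that is needed.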
On 22 January 2018 at 09:34, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
Which URL isn't working for you? You should follow the links in a web
browser, select the most recent build, and then click the "Repo URL"
button to get the URL to provide yum.
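[A sketch of what that looks like on the yum side; the repo section name and the REPO_URL value below are placeholders, to be replaced with the address the "Repo URL" button returns:]

```shell
# Hypothetical repo definition; REPO_URL stands in for the address returned
# by the "Repo URL" button on shaman.ceph.com for the chosen build.
REPO_URL="https://example.invalid/ceph-iscsi-cli/master"
cat > /tmp/ceph-iscsi-cli.repo <<EOF
[ceph-iscsi-cli]
name=ceph-iscsi-cli (shaman build)
baseurl=${REPO_URL}/x86_64/
enabled=1
gpgcheck=0
EOF
# To use it for real: copy into /etc/yum.repos.d/ and run
#   yum install ceph-iscsi-cli
cat /tmp/ceph-iscsi-cli.repo
```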
--
On Mon, Jan 22, 2018 at 9:30 AM, Steven Vacaroaia <stef97@xxxxxxxxx> wrote:
>
> Thanks again for your prompt response
>
> My apologies for wasting your time with a trivial question, but the repo
> provided does not contain RPMs, just a bunch of compressed files (like
> 2f241a8387cf35372fd709be4ef6ec83b8a00cc744bb90f31d82bb27bdd80531-other.sqlite.bz2)
> and a repomd.xml.
>
> How/what exactly should I download/install ?
>
> Steven
>
> On 22 January 2018 at 09:18, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
>>
>> You can use these repos [1][2][3]
>>
>> [1] https://shaman.ceph.com/repos/python-rtslib/master/
>> [2] https://shaman.ceph.com/repos/ceph-iscsi-config/master/
>> [3] https://shaman.ceph.com/repos/ceph-iscsi-cli/master/
>>
>> targetcli isn't used for iSCSI over RBD (gwcli from ceph-iscsi-cli
>> replaces it), so you hopefully shouldn't need to update it.
>>
>> On Mon, Jan 22, 2018 at 9:06 AM, Steven Vacaroaia <stef97@xxxxxxxxx>
>> wrote:
>> > Excellent news
>> > Many thanks for all your efforts
>> >
>> > If you do not mind, please confirm the following steps (for CentOS 7,
>> > kernel version 3.10.0-693.11.6.el7.x86_64):
>> >
>> > - download and install the RPMs from the x86_64 repositories you provided
>> > - do a git clone and, if a new version is available, run "pip install . --upgrade"
>> >   for:
>> >     ceph-iscsi-cli
>> >     ceph-iscsi-config
>> >     rtslib-fb
>> >     targetcli-fb
>> > - reboot
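[A dry-run sketch of the steps above; package and service names are taken from the shaman repos mentioned in this thread and should be treated as assumptions:]

```shell
# Echo each step instead of executing it; set DRYRUN= (empty) to run for real.
run() { echo "+ $*"; [ -n "${DRYRUN-1}" ] || "$@"; }

run yum install -y tcmu-runner libtcmu python-rtslib ceph-iscsi-config ceph-iscsi-cli
run systemctl enable rbd-target-api   # the daemon behind gwcli
run reboot
```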
>> >
>> > Steven
>> >
>> >
>> > On 22 January 2018 at 08:53, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
>> >>
>> >> The v4.13-based kernel with the necessary bug fixes and TCMU changes
>> >> is available here [1] and tcmu-runner v1.3.0 is available here [2].
>> >>
>> >> [1] https://shaman.ceph.com/repos/kernel/ceph-iscsi-test/
>> >> [2] https://shaman.ceph.com/repos/tcmu-runner/master/
>> >>
>> >> On Sat, Jan 20, 2018 at 7:33 AM, Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx>
>> >> wrote:
>> >> >
>> >> >
>> >> > Sorry for asking what may be obvious, but is this the kernel
>> >> > available in elrepo, or a different one?
>> >> >
>> >> >
>> >> >
>> >> >
>> >> >
>> >> > -----Original Message-----
>> >> > From: Mike Christie [mailto:mchristi@xxxxxxxxxx]
>> >> > Sent: zaterdag 20 januari 2018 1:19
>> >> > To: Steven Vacaroaia; Joshua Chen
>> >> > Cc: ceph-users
>> >> > Subject: Re: iSCSI over RBD
>> >> >
>> >> > On 01/19/2018 02:12 PM, Steven Vacaroaia wrote:
>> >> >> Hi Joshua,
>> >> >>
>> >> >> I was under the impression that kernel 3.10.0-693 will work with
>> >> >> iscsi
>> >> >>
>> >> >
>> >> > That kernel works with RHCS 2.5 and below. You need the RPMs from
>> >> > that or the matching upstream releases. Besides having to dig out
>> >> > the versions and match things up, the problem with those releases
>> >> > is that they were tech preview or only support Linux initiators.
>> >> >
>> >> > It looks like you are using the newer upstream tools or RHCS 3.0
>> >> > tools. For them you need the RHEL 7.5 beta or a newer kernel, or an
>> >> > upstream one. For upstream, all the patches got merged into the
>> >> > target layer maintainer's tree yesterday. A new tcmu-runner release
>> >> > has been made. And I just pushed a test kernel with all the patches,
>> >> > based on 4.13 (4.14 had a bug in the login code which is still being
>> >> > fixed), to github, so people do not have to wait for the next-next
>> >> > kernel release to come out.
>> >> >
>> >> > Just give us a couple of days for the kernel build to be done, to
>> >> > make the needed ceph-iscsi-* release (the current version will fail
>> >> > to create rbd images with the current tcmu-runner release), and to
>> >> > get the documentation updated, because some links are incorrect and
>> >> > some version info needs to be updated.
>> >> >
>> >> >
>> >> >> Unfortunately I still cannot create a disk because qfull_time_out
>> >> >> is not supported.
>> >> >>
>> >> >> What am I missing / doing wrong?
>> >> >>
>> >> >> 2018-01-19 15:06:45,216 INFO [lun.py:601:add_dev_to_lio()] -
>> >> >> (LUN.add_dev_to_lio) Adding image 'rbd.disk2' to LIO
>> >> >> 2018-01-19 15:06:45,295 ERROR [lun.py:634:add_dev_to_lio()] - Could
>> >> >> not set LIO device attribute cmd_time_out/qfull_time_out for device:
>> >> >> rbd.disk2. Kernel not supported. - error(Cannot find attribute:
>> >> >> qfull_time_out)
>> >> >> 2018-01-19 15:06:45,300 ERROR [rbd-target-api:731:_disk()] - LUN
>> >> >> alloc problem - Could not set LIO device attribute
>> >> >> cmd_time_out/qfull_time_out for device: rbd.disk2. Kernel not
>> >> >> supported. - error(Cannot find attribute: qfull_time_out)
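[A quick way to check whether the running kernel exposes that attribute; the configfs path follows the standard LIO layout and assumes a TCMU backstore has already been configured via targetcli or gwcli:]

```shell
# Look for qfull_time_out under any configured TCMU backstore; if the glob
# does not match, or the file is absent, the kernel lacks the TCMU patches.
found=0
for f in /sys/kernel/config/target/core/user_*/*/attrib/qfull_time_out; do
  [ -e "$f" ] && { echo "supported: $f"; found=1; }
done
[ "$found" -eq 1 ] || echo "qfull_time_out not found"
```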
>> >> >>
>> >> >>
>> >> >> Many thanks
>> >> >>
>> >> >> Steven
>> >> >>
>> >> >> On 4 January 2018 at 22:40, Joshua Chen <cschen@xxxxxxxxxxxxxxxxxxx> wrote:
>> >> >>
>> >> >> Hello Steven,
>> >> >> I am using CentOS 7.4.1708 with kernel 3.10.0-693.el7.x86_64
>> >> >> and the following packages:
>> >> >>
>> >> >> ceph-iscsi-cli-2.5-9.el7.centos.noarch.rpm
>> >> >> ceph-iscsi-config-2.3-12.el7.centos.noarch.rpm
>> >> >> libtcmu-1.3.0-0.4.el7.centos.x86_64.rpm
>> >> >> libtcmu-devel-1.3.0-0.4.el7.centos.x86_64.rpm
>> >> >> python-rtslib-2.1.fb64-2.el7.centos.noarch.rpm
>> >> >> python-rtslib-doc-2.1.fb64-2.el7.centos.noarch.rpm
>> >> >> targetcli-2.1.fb47-0.1.20170815.git5bf3517.el7.centos.noarch.rpm
>> >> >> tcmu-runner-1.3.0-0.4.el7.centos.x86_64.rpm
>> >> >> tcmu-runner-debuginfo-1.3.0-0.4.el7.centos.x86_64.rpm
>> >> >>
>> >> >>
>> >> >> Cheers
>> >> >> Joshua
>> >> >>
>> >> >>
>> >> >> On Fri, Jan 5, 2018 at 2:14 AM, Steven Vacaroaia <stef97@xxxxxxxxx> wrote:
>> >> >>
>> >> >> Hi Joshua,
>> >> >>
>> >> >> How did you manage to use the iSCSI gateway?
>> >> >> I would like to do that but am still waiting for a patched
>> >> >> kernel.
>> >> >>
>> >> >> What kernel/OS did you use and/or how did you patch it?
>> >> >>
>> >> >> Thanks
>> >> >> Steven
>> >> >>
>> >> >> On 4 January 2018 at 04:50, Joshua Chen <cschen@xxxxxxxxxxxxxxxxxxx> wrote:
>> >> >>
>> >> >> Dear all,
>> >> >> Although I managed to run gwcli and create some IQNs and
>> >> >> LUNs, I still need a working config example so that my
>> >> >> initiator can connect and get the LUN.
>> >> >>
>> >> >> I am familiar with targetcli, and I used to do the following
>> >> >> ACL-style connection rather than password auth; the targetcli
>> >> >> setting tree is here
>> >> >> (or see this page:
>> >> >> <http://www.asiaa.sinica.edu.tw/~cschen/targetcli.html>)
>> >> >>
>> >> >> #targetcli ls
>> >> >> o- / ......................................................................... [...]
>> >> >>   o- backstores .............................................................. [...]
>> >> >>   | o- block .................................................. [Storage Objects: 1]
>> >> >>   | | o- vmware_5t .......... [/dev/rbd/rbd/vmware_5t (5.0TiB) write-thru activated]
>> >> >>   | |   o- alua ................................................... [ALUA Groups: 1]
>> >> >>   | |     o- default_tg_pt_gp ....................... [ALUA state: Active/optimized]
>> >> >>   | o- fileio ................................................. [Storage Objects: 0]
>> >> >>   | o- pscsi .................................................. [Storage Objects: 0]
>> >> >>   | o- ramdisk ................................................ [Storage Objects: 0]
>> >> >>   | o- user:rbd ............................................... [Storage Objects: 0]
>> >> >>   o- iscsi ............................................................ [Targets: 1]
>> >> >>   | o- iqn.2017-12.asiaa.cephosd1:vmware5t .............................. [TPGs: 1]
>> >> >>   |   o- tpg1 ...................................................... [gen-acls, no-auth]
>> >> >>   |     o- acls ........................................................ [ACLs: 12]
>> >> >>   |     | o- iqn.1994-05.com.redhat:15dbed23be9e ................. [Mapped LUNs: 1]
>> >> >>   |     | | o- mapped_lun0 ......................... [lun0 block/vmware_5t (rw)]
>> >> >>   |     | o- iqn.1994-05.com.redhat:15dbed23be9e-ovirt1 .......... [Mapped LUNs: 1]
>> >> >>   |     | | o- mapped_lun0 ......................... [lun0 block/vmware_5t (rw)]
>> >> >>   |     | o- iqn.1994-05.com.redhat:2af344ba6ae5-ceph-admin-test . [Mapped LUNs: 1]
>> >> >>   |     | | o- mapped_lun0 ......................... [lun0 block/vmware_5t (rw)]
>> >> >>   |     | o- iqn.1994-05.com.redhat:67669afedddf ................. [Mapped LUNs: 1]
>> >> >>   |     | | o- mapped_lun0 ......................... [lun0 block/vmware_5t (rw)]
>> >> >>   |     | o- iqn.1994-05.com.redhat:67669afedddf-ovirt3 .......... [Mapped LUNs: 1]
>> >> >>   |     | | o- mapped_lun0 ......................... [lun0 block/vmware_5t (rw)]
>> >> >>   |     | o- iqn.1994-05.com.redhat:a7c1ec3c43f7 ................. [Mapped LUNs: 1]
>> >> >>   |     | | o- mapped_lun0 ......................... [lun0 block/vmware_5t (rw)]
>> >> >>   |     | o- iqn.1994-05.com.redhat:a7c1ec3c43f7-ovirt2 .......... [Mapped LUNs: 1]
>> >> >>   |     | | o- mapped_lun0 ......................... [lun0 block/vmware_5t (rw)]
>> >> >>   |     | o- iqn.1994-05.com.redhat:b01662ec2129-ceph-node2 ...... [Mapped LUNs: 1]
>> >> >>   |     | | o- mapped_lun0 ......................... [lun0 block/vmware_5t (rw)]
>> >> >>   |     | o- iqn.1994-05.com.redhat:d46b42a1915b-ceph-node3 ...... [Mapped LUNs: 1]
>> >> >>   |     | | o- mapped_lun0 ......................... [lun0 block/vmware_5t (rw)]
>> >> >>   |     | o- iqn.1994-05.com.redhat:e7692a10f661-ceph-node1 ...... [Mapped LUNs: 1]
>> >> >>   |     | | o- mapped_lun0 ......................... [lun0 block/vmware_5t (rw)]
>> >> >>   |     | o- iqn.1998-01.com.vmware:localhost-0f904dfd ........... [Mapped LUNs: 1]
>> >> >>   |     | | o- mapped_lun0 ......................... [lun0 block/vmware_5t (rw)]
>> >> >>   |     | o- iqn.1998-01.com.vmware:localhost-6af62e4c ........... [Mapped LUNs: 1]
>> >> >>   |     |   o- mapped_lun0 ......................... [lun0 block/vmware_5t (rw)]
>> >> >>   |     o- luns ........................................................ [LUNs: 1]
>> >> >>   |     | o- lun0 ... [block/vmware_5t (/dev/rbd/rbd/vmware_5t) (default_tg_pt_gp)]
>> >> >>   |     o- portals ................................................. [Portals: 1]
>> >> >>   |       o- 172.20.0.12:3260 ............................................... [OK]
>> >> >>   o- loopback ......................................................... [Targets: 0]
>> >> >>   o- xen_pvscsi ....................................................... [Targets: 0]
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >> My targetcli setup procedure is as follows; could someone
>> >> >> translate it to the equivalent gwcli procedure?
>> >> >> Sorry for asking, but documentation and examples are lacking.
>> >> >> Thanks in advance.
>> >> >>
>> >> >> Cheers
>> >> >> Joshua
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >> targetcli /backstores/block create name=vmware_5t dev=/dev/rbd/rbd/vmware_5t
>> >> >> targetcli /iscsi/ create iqn.2017-12.asiaa.cephosd1:vmware5t
>> >> >> targetcli /iscsi/iqn.2017-12.asiaa.cephosd1:vmware5t/tpg1/portals delete ip_address=0.0.0.0 ip_port=3260
>> >> >>
>> >> >> targetcli
>> >> >> cd /iscsi/iqn.2017-12.asiaa.cephosd1:vmware5t/tpg1
>> >> >> portals/ create 172.20.0.12
>> >> >> acls/
>> >> >> create iqn.1994-05.com.redhat:e7692a10f661-ceph-node1
>> >> >> create iqn.1994-05.com.redhat:b01662ec2129-ceph-node2
>> >> >> create iqn.1994-05.com.redhat:d46b42a1915b-ceph-node3
>> >> >> create iqn.1994-05.com.redhat:15dbed23be9e
>> >> >> create iqn.1994-05.com.redhat:a7c1ec3c43f7
>> >> >> create iqn.1994-05.com.redhat:67669afedddf
>> >> >> create iqn.1994-05.com.redhat:15dbed23be9e-ovirt1
>> >> >> create iqn.1994-05.com.redhat:a7c1ec3c43f7-ovirt2
>> >> >> create iqn.1994-05.com.redhat:67669afedddf-ovirt3
>> >> >> create iqn.1994-05.com.redhat:2af344ba6ae5-ceph-admin-test
>> >> >> create iqn.1998-01.com.vmware:localhost-6af62e4c
>> >> >> create iqn.1998-01.com.vmware:localhost-0f904dfd
>> >> >> cd ..
>> >> >> set attribute generate_node_acls=1
>> >> >> cd luns
>> >> >> create /backstores/block/vmware_5t
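[For what it's worth, an untested sketch of a gwcli equivalent, pieced together from the upstream ceph-iscsi docs: the gateway hostname is a placeholder, exact paths and arguments differ between ceph-iscsi-cli versions, gwcli expects the gateway to be registered before hosts are added, and some versions require CHAP rather than open ACLs.]

```
/> cd /iscsi-target
/> create iqn.2017-12.asiaa.cephosd1:vmware5t
/> cd /iscsi-target/iqn.2017-12.asiaa.cephosd1:vmware5t/gateways
/> create <gateway-hostname> 172.20.0.12
/> cd /disks
/> create pool=rbd image=vmware_5t size=5T
/> cd /iscsi-target/iqn.2017-12.asiaa.cephosd1:vmware5t/hosts
/> create iqn.1994-05.com.redhat:e7692a10f661-ceph-node1
/> disk add rbd.vmware_5t
```

(repeat the hosts/ create and disk add steps for each initiator IQN)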
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >> On Thu, Jan 4, 2018 at 10:55 AM, Joshua Chen <cschen@xxxxxxxxxxxxxxxxxxx> wrote:
>> >> >>
>> >> >> I had the same problem before; mine is CentOS, and when
>> >> >> I created
>> >> >> /iscsi/create iqn_bla-bla
>> >> >> it gave
>> >> >> Local LIO instance already has LIO configured with a
>> >> >> target - unable to continue
>> >> >>
>> >> >>
>> >> >>
>> >> >> then finally the solution happened to be to turn off
>> >> >> the target service:
>> >> >>
>> >> >> systemctl stop target
>> >> >> systemctl disable target
>> >> >>
>> >> >>
>> >> >> Somehow they are doing the same thing; you need to
>> >> >> disable the 'target' service (targetcli) in order to
>> >> >> allow gwcli (rbd-target-api) to do its job.
>> >> >>
>> >> >> Cheers
>> >> >> Joshua
>> >> >>
>> >> >> On Thu, Jan 4, 2018 at 2:39 AM, Mike Christie <mchristi@xxxxxxxxxx> wrote:
>> >> >>
>> >> >> On 12/25/2017 03:13 PM, Joshua Chen wrote:
>> >> >> > Hello folks,
>> >> >> > I am trying to share my ceph rbd images through the
>> >> >> > iscsi protocol.
>> >> >> >
>> >> >> > I am trying the iscsi-gateway:
>> >> >> > http://docs.ceph.com/docs/master/rbd/iscsi-overview/
>> >> >> >
>> >> >> >
>> >> >> > now
>> >> >> >
>> >> >> > systemctl start rbd-target-api
>> >> >> > is working and I could run gwcli
>> >> >> > (at a CentOS 7.4 osd node)
>> >> >> >
>> >> >> > gwcli
>> >> >> > /> ls
>> >> >> > o- / .................................................................. [...]
>> >> >> >   o- clusters ................................................. [Clusters: 1]
>> >> >> >   | o- ceph ..................................................... [HEALTH_OK]
>> >> >> >   |   o- pools .................................................... [Pools: 1]
>> >> >> >   |   | o- rbd ................... [(x3), Commit: 0b/25.9T (0%), Used: 395M]
>> >> >> >   |   o- topology ......................................... [OSDs: 9,MONs: 3]
>> >> >> >   o- disks ................................................... [0b, Disks: 0]
>> >> >> >   o- iscsi-target ............................................... [Targets: 0]
>> >> >> >
>> >> >> >
>> >> >> > but when I created an iscsi-target, I got
>> >> >> >
>> >> >> > Local LIO instance already has LIO configured with a target - unable to
>> >> >> > continue
>> >> >> >
>> >> >> > /> /iscsi-target create iqn.2003-01.org.linux-iscsi.ceph-node1.x8664:sn.571e1ab51af2
>> >> >> > Local LIO instance already has LIO configured with a target - unable to
>> >> >> > continue
>> >> >> > />
>> >> >>
>> >> >>
>> >> >> Could you send the output of
>> >> >>
>> >> >> targetcli ls
>> >> >>
>> >> >> ?
>> >> >>
>> >> >> What distro are you using?
>> >> >>
>> >> >> You might just have a target set up from a non-gwcli
>> >> >> source, maybe from the distro's targetcli systemd tools.
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >> _______________________________________________
>> >> >> ceph-users mailing list
>> >> >> ceph-users@xxxxxxxxxxxxxx
>> >> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >
>> >> >
>> >> >
>> >>
>> >>
>> >>
>> >> --
>> >> Jason
>> >
>> >
>>
>>
>>
>> --
>> Jason
>
>
Jason