Re: add existing rbd to new tcmu iscsi gateways

Looks like that may have recently been broken.

Unfortunately there's nothing useful in rbd-target-api.log or rbd-target-gw.log. Is there an increased log level I can enable for whichever web service is handling this?
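For what it's worth, what I was hoping for is something along these lines in the gateway config — this is only a guess on my part; I haven't verified that this option name exists in the ceph-iscsi-config release I'm running (check its settings.py for the actual supported keys):

```ini
# /etc/ceph/iscsi-gateway.cfg -- hypothetical sketch, option name unverified
[config]
logger_level = DEBUG
```

followed by restarting rbd-target-api and rbd-target-gw to pick the change up.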

[root@dc1srviscsi01 ~]# rbd -p vmware_ssd_metadata --data-pool vmware_ssd --size 2T create ssd_test_0

[root@dc1srviscsi01 ~]# rbd -p vmware_ssd_metadata info ssd_test_0
rbd image 'ssd_test_0':
        size 2 TiB in 524288 objects
        order 22 (4 MiB objects)
        id: b4343f6b8b4567
        data_pool: vmware_ssd
        block_name_prefix: rbd_data.56.b4343f6b8b4567
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, data-pool
        op_features: 
        flags: 
        create_timestamp: Wed Oct 10 16:36:18 2018

[root@dc1srviscsi01 ~]# gwcli 
/> cd disks/
/disks> create pool=vmware_ssd_metadata image=vmware_ssd_metadata.ssd_test_0 size=2T
Failed : 500 INTERNAL SERVER ERROR
/disks> 


[root@dc1srviscsi02 ~]# rpm -qa |egrep "ceph|iscsi|tcmu|rst|kernel-ml"
ceph-mds-13.2.2-0.el7.x86_64
ceph-mgr-13.2.2-0.el7.x86_64
ceph-release-1-1.el7.noarch
tcmu-runner-1.4.0-1.el7.x86_64
ceph-iscsi-cli-2.7-54.g9b18a3b.el7.noarch
ceph-common-13.2.2-0.el7.x86_64
ceph-mon-13.2.2-0.el7.x86_64
ceph-13.2.2-0.el7.x86_64
tcmu-runner-debuginfo-1.4.0-1.el7.x86_64
ceph-iscsi-config-2.6-42.gccca57d.el7.noarch
libcephfs2-13.2.2-0.el7.x86_64
ceph-base-13.2.2-0.el7.x86_64
ceph-osd-13.2.2-0.el7.x86_64
ceph-radosgw-13.2.2-0.el7.x86_64
kernel-ml-4.18.12-1.el7.elrepo.x86_64
python-cephfs-13.2.2-0.el7.x86_64
ceph-selinux-13.2.2-0.el7.x86_64



On Tue, Oct 9, 2018 at 3:51 PM Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
On Tue, Oct 9, 2018 at 3:14 PM Brady Deetz <bdeetz@xxxxxxxxx> wrote:
>
> I am attempting to migrate to the new tcmu iscsi gateway. Is there a way to configure gwcli to export an rbd that was created outside gwcli?

You should be able to just run "/disks create <pool>.<image name> <size>" from within "gwcli" to have it add an existing image.

> This is necessary for me because I have a lun exported from an old LIO gateway to a Windows host that I need to transition to the new tcmu based cluster.
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
Jason
