Re: iSCSI target LUN error

On 15/11/2022 23:44, Randy Morgan wrote:
You are correct, I am using cephadm to create the iSCSI portals. The cluster is one I had been learning on, and I wondered if the problem was caused by the number of creations and deletions of things, so I rebuilt the cluster. Now I am getting this response even when creating my first iSCSI target. Here is the output of `gwcli ls`:

sh-4.4# gwcli ls
o- / ........................................................................................................................ [...]
  o- cluster ........................................................................................................ [Clusters: 1]
  | o- ceph ......................................................................................................... [HEALTH_WARN]
  |   o- pools ......................................................................................................... [Pools: 8]
  |   | o- .rgw.root ............................................................ [(x3), Commit: 0.00Y/71588776M (0%), Used: 1323b]
  |   | o- cephfs_data .......................................................... [(x3), Commit: 0.00Y/71588776M (0%), Used: 1639b]
  |   | o- cephfs_metadata ...................................................... [(x3), Commit: 0.00Y/71588776M (0%), Used: 3434b]
  |   | o- default.rgw.control .................................................. [(x3), Commit: 0.00Y/71588776M (0%), Used: 0.00Y]
  |   | o- default.rgw.log ...................................................... [(x3), Commit: 0.00Y/71588776M (0%), Used: 3702b]
  |   | o- default.rgw.meta ...................................................... [(x3), Commit: 0.00Y/71588776M (0%), Used: 382b]
  |   | o- device_health_metrics ................................................ [(x3), Commit: 0.00Y/71588776M (0%), Used: 0.00Y]
  |   | o- rhv-ceph-ssd ..................................................... [(x3), Commit: 0.00Y/7868560896K (0%), Used: 511746b]
  |   o- topology .............................................................................................. [OSDs: 36,MONs: 3]
  o- disks ...................................................................................................... [0.00Y, Disks: 0]
  o- iscsi-targets .............................................................................. [DiscoveryAuth: None, Targets: 1]
    o- iqn.2001-07.com.ceph:1668466555428 ............................................................... [Auth: None, Gateways: 1]
      o- disks ......................................................................................................... [Disks: 0]
      o- gateways ........................................................................................... [Up: 1/1, Portals: 1]
      | o- host.containers.internal ........................................................................ [192.168.105.145 (UP)]

Please manually remove this gateway before taking any further steps.

This looks like a bug in cephadm; you could raise a tracker issue for it.
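For reference, a minimal sketch of removing the stale gateway from the gwcli shell, using the target IQN and gateway name from the output above (the exact delete syntax, including whether confirm=true is accepted or required, varies between ceph-iscsi versions):

sh-4.4# # NOTE: 'confirm=true' is an assumption; some ceph-iscsi versions take a bare delete
sh-4.4# gwcli
/> cd /iscsi-targets/iqn.2001-07.com.ceph:1668466555428/gateways
/iscsi-target...ways> delete host.containers.internal confirm=true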

Thanks


      o- host-groups ................................................................................................. [Groups : 0]
      o- hosts ...................................................................................... [Auth: ACL_ENABLED, Hosts: 0]
sh-4.4#

Randy

On 11/9/2022 6:36 PM, Xiubo Li wrote:

On 10/11/2022 02:21, Randy Morgan wrote:
I am trying to create a second iSCSI target, and I keep getting an error when creating it:


           Failed to update target 'iqn.2001-07.com.ceph:1667946365517'

disk create/update failed on host.containers.internal. LUN allocation failure

I think you were using cephadm to add the iSCSI targets, not gwcli or the REST APIs directly.
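For comparison, a minimal sketch of creating a target and its gateway directly in the gwcli shell rather than through cephadm (the IQN, gateway hostname, and IP address below are illustrative placeholders; skipchecks=true is only needed when gwcli's host validation gets in the way):

sh-4.4# # NOTE: placeholder names; adjust the IQN, hostname, and IP to your cluster
sh-4.4# gwcli
/> cd /iscsi-targets
/iscsi-targets> create iqn.2001-07.com.ceph:demo-target
/iscsi-targets> cd iqn.2001-07.com.ceph:demo-target/gateways
/iscsi-target...ways> create gw-node1 192.168.105.200 skipchecks=true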

Previously we hit other issues where logins failed because two gateways were using the same IP address. Please share your `gwcli ls` output so we can see the 'host.containers.internal' gateway's config.
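If the gwcli tree alone doesn't show enough, the raw ceph-iscsi configuration can also be dumped from RADOS; a sketch assuming the default location (a JSON object named gateway.conf in the rbd pool):

sh-4.4# # NOTE: assumes the default pool/object names used by ceph-iscsi
sh-4.4# rados -p rbd get gateway.conf /tmp/gateway.conf
sh-4.4# python3 -m json.tool /tmp/gateway.conf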

Thanks!


I am running Ceph Pacific:
version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)

All of the information I can find on this problem is from three years ago and doesn't seem to apply any more. Does anyone know how to correct this problem?

Randy




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



