Re: iscsi target lun error

Hi Xiubo, Randy,

This is caused by Podman 4.1+ adding the entry '<host_ip_address> host.containers.internal' to the container's /etc/hosts.
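
A quick way to check whether a container is affected (the container ID below is just whatever 'podman ps' reports for the iscsi/tcmu container on your gateway host):

  podman ps | grep -i iscsi
  podman exec <container_id> grep host.containers.internal /etc/hosts

If that entry is present, it would explain why the gateway registers itself as 'host.containers.internal' instead of its real hostname.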

The workaround consists of either downgrading the Podman package to v4.0 (on RHEL 8: dnf downgrade podman-4.0.2-6.module+el8.6.0+14877+f643d2d6) or adding the --no-hosts option to the 'podman run' command in /var/lib/ceph/$(ceph fsid)/iscsi.iscsi.test-iscsi1.xxxxxx/unit.run and restarting the iscsi container service.
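
For reference, here is a minimal sketch of the second workaround. The daemon name keeps the placeholder from the path above, and the systemd unit name is my assumption based on cephadm's usual ceph-<fsid>@<daemon>.service naming, so adapt both to your cluster:

  # In unit.run, append --no-hosts to the 'podman run' invocation, e.g. change
  #   /usr/bin/podman run --rm --ipc=host --net=host ...
  # into
  #   /usr/bin/podman run --rm --ipc=host --net=host --no-hosts ...
  vi /var/lib/ceph/$(ceph fsid)/iscsi.iscsi.test-iscsi1.xxxxxx/unit.run

  # Then restart the iscsi daemon so the container is recreated with the new options
  systemctl restart ceph-$(ceph fsid)@iscsi.iscsi.test-iscsi1.xxxxxx.service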

[1] and [2] could well have the same cause. The RHCS Block Device Guide [3] lists RHEL 8.4 as a prerequisite. I don't know which Podman version shipped with RHEL 8.4 at the time, but with RHEL 8.7 and Podman 4.2 it's broken.

I'll open an RHCS case today to have this fixed and to have other containers (grafana, prometheus, etc.) checked against this new Podman behavior.

Regards,
Frédéric.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1979449
[2] https://tracker.ceph.com/issues/57018
[3] https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html-single/block_device_guide/index#prerequisites_9

----- On 21 Nov 22, at 6:45, Xiubo Li xiubli@xxxxxxxxxx wrote:

> On 15/11/2022 23:44, Randy Morgan wrote:
>> You are correct, I am using cephadm to create the iscsi portals.
>> The cluster was one I had been learning a lot with, and I wondered if
>> the problem was due to the number of creations and deletions of things,
>> so I rebuilt the cluster. Now I am getting this response even when
>> creating my first iscsi target. Here is the output of gwcli ls:
>>
>> sh-4.4# gwcli ls
>> o- / ......................................................................................................... [...]
>>   o- cluster ........................................................................................ [Clusters: 1]
>>   | o- ceph ......................................................................................... [HEALTH_WARN]
>>   |   o- pools ......................................................................................... [Pools: 8]
>>   |   | o- .rgw.root .............................................. [(x3), Commit: 0.00Y/71588776M (0%), Used: 1323b]
>>   |   | o- cephfs_data ............................................ [(x3), Commit: 0.00Y/71588776M (0%), Used: 1639b]
>>   |   | o- cephfs_metadata ........................................ [(x3), Commit: 0.00Y/71588776M (0%), Used: 3434b]
>>   |   | o- default.rgw.control .................................... [(x3), Commit: 0.00Y/71588776M (0%), Used: 0.00Y]
>>   |   | o- default.rgw.log ........................................ [(x3), Commit: 0.00Y/71588776M (0%), Used: 3702b]
>>   |   | o- default.rgw.meta ........................................ [(x3), Commit: 0.00Y/71588776M (0%), Used: 382b]
>>   |   | o- device_health_metrics .................................. [(x3), Commit: 0.00Y/71588776M (0%), Used: 0.00Y]
>>   |   | o- rhv-ceph-ssd ........................................ [(x3), Commit: 0.00Y/7868560896K (0%), Used: 511746b]
>>   |   o- topology ............................................................................... [OSDs: 36,MONs: 3]
>>   o- disks ....................................................................................... [0.00Y, Disks: 0]
>>   o- iscsi-targets ............................................................... [DiscoveryAuth: None, Targets: 1]
>>     o- iqn.2001-07.com.ceph:1668466555428 .................................................. [Auth: None, Gateways: 1]
>>       o- disks ............................................................................................ [Disks: 0]
>>       o- gateways ............................................................................... [Up: 1/1, Portals: 1]
>>       | o- host.containers.internal ............................................................ [192.168.105.145 (UP)]
> 
> Please manually remove this gateway before doing any further steps.
> 
> This is probably a bug in cephadm, so you could raise a tracker issue for it.
> 
> Thanks
> 
> 
>>       o- host-groups .................................................................................. [Groups : 0]
>>       o- hosts ......................................................................... [Auth: ACL_ENABLED, Hosts: 0]
>> sh-4.4#
>>
>> Randy
>>
>> On 11/9/2022 6:36 PM, Xiubo Li wrote:
>>>
>>> On 10/11/2022 02:21, Randy Morgan wrote:
>>>> I am trying to create a second iscsi target and I keep getting an
>>>> error when I create the second target:
>>>>
>>>>
>>>>            Failed to update target 'iqn.2001-07.com.ceph:1667946365517'
>>>>
>>>> disk create/update failed on host.containers.internal. LUN
>>>> allocation failure
>>>>
>>> I think you were using cephadm to add the iscsi targets, not gwcli or
>>> the REST APIs directly.
>>>
>>> The other issue we hit before was login failures, which happened because
>>> there were two gateways using the same IP address. Please share your
>>> `gwcli ls` output so we can see the 'host.containers.internal' gateway's config.
>>>
>>> Thanks!
>>>
>>>
>>>> I am running Ceph Pacific: version 16.2.7
>>>> (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)
>>>>
>>>> All of the information I can find on this problem is from 3 years
>>>> ago and doesn't seem to apply any more.  Does anyone know how to
>>>> correct this problem?
>>>>
>>>> Randy
>>>>
>>>
>>
> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



