Re: lun allocation failure

On a Pacific cluster I have the same error message:

---snip---
2023-08-25T11:56:47.222407+02:00 ses7-host1 conmon[1383161]: debug (LUN.add_dev_to_lio) Adding image 'iscsi-pool/image3' to LIO backstore rbd
2023-08-25T11:56:47.714375+02:00 ses7-host1 kernel: [12861743.121824] rbd: rbd1: capacity 5368709120 features 0x3d
2023-08-25T11:56:47.746390+02:00 ses7-host1 conmon[1383161]: /dev/rbd1
2023-08-25T11:56:47.764072+02:00 ses7-host1 conmon[1383161]: debug failed to add iscsi-pool/image3 to LIO - error([Errno 22] Invalid argument)
2023-08-25T11:56:47.764563+02:00 ses7-host1 conmon[1383161]: debug LUN alloc problem - failed to add iscsi-pool/image3 to LIO - error([Errno 22] Invalid argument)
2023-08-25T11:56:47.766292+02:00 ses7-host1 kernel: [12861743.172626] target_core_rbd: RBD: emulate_legacy_capacity must be disabled for RBD_FEATURE_OBJECT_MAP images
2023-08-25T11:56:47.766304+02:00 ses7-host1 kernel: [12861743.172653] target_core_rbd: RBD: emulate_legacy_capacity must be disabled for RBD_FEATURE_OBJECT_MAP images
2023-08-25T11:56:47.770242+02:00 ses7-host1 conmon[1383161]: debug ::ffff:127.0.0.1 - - [25/Aug/2023 09:56:47] "PUT /api/_disk/iscsi-pool/image3 HTTP/1.1" 500 -
2023-08-25T11:56:47.770640+02:00 ses7-host1 conmon[1383161]: ::ffff:127.0.0.1 - - [25/Aug/2023 09:56:47] "PUT /api/_disk/iscsi-pool/image3 HTTP/1.1" 500 -
2023-08-25T11:56:47.772916+02:00 ses7-host1 conmon[1383161]: debug _disk change on localhost failed with 500
2023-08-25T11:56:47.773753+02:00 ses7-host1 conmon[1383161]: debug ::ffff:192.168.168.81 - - [25/Aug/2023 09:56:47] "PUT /api/disk/iscsi-pool/image3 HTTP/1.1" 500 -
2023-08-25T11:56:47.776186+02:00 ses7-host1 conmon[1741104]: iscsi REST API failed PUT req status: 500
2023-08-25T11:56:47.777479+02:00 ses7-host1 conmon[1741104]: Error while calling Task(ns=iscsi/target/edit, md={'target_iqn': 'iqn.2001-07.com.ceph:1692955254223'})
2023-08-25T11:56:47.777689+02:00 ses7-host1 conmon[1741104]: Traceback (most recent call last):
2023-08-25T11:56:47.777900+02:00 ses7-host1 conmon[1741104]: File "/usr/share/ceph/mgr/dashboard/controllers/iscsi.py", line 789, in _create
2023-08-25T11:56:47.778333+02:00 ses7-host1 conmon[1741104]:     controls=controls)
2023-08-25T11:56:47.778725+02:00 ses7-host1 conmon[1741104]: File "/usr/share/ceph/mgr/dashboard/rest_client.py", line 535, in func_wrapper
2023-08-25T11:56:47.778879+02:00 ses7-host1 conmon[1741104]:     **kwargs)
2023-08-25T11:56:47.779103+02:00 ses7-host1 conmon[1741104]: File "/usr/share/ceph/mgr/dashboard/services/iscsi_client.py", line 126, in create_disk
2023-08-25T11:56:47.779264+02:00 ses7-host1 conmon[1741104]:     'wwn': wwn
2023-08-25T11:56:47.779412+02:00 ses7-host1 conmon[1741104]: File "/usr/share/ceph/mgr/dashboard/rest_client.py", line 324, in __call__
2023-08-25T11:56:47.779549+02:00 ses7-host1 conmon[1741104]:     data, raw_content, headers)
2023-08-25T11:56:47.779715+02:00 ses7-host1 conmon[1741104]: File "/usr/share/ceph/mgr/dashboard/rest_client.py", line 453, in do_request
2023-08-25T11:56:47.779861+02:00 ses7-host1 conmon[1741104]:     resp.content)
2023-08-25T11:56:47.780025+02:00 ses7-host1 conmon[1741104]: dashboard.rest_client.RequestException: iscsi REST API failed request with status code 500
2023-08-25T11:56:47.780189+02:00 ses7-host1 conmon[1741104]: (b'{"message":"disk create/update failed on ses7-host1. LUN all'
2023-08-25T11:56:47.780332+02:00 ses7-host1 conmon[1741104]: b'ocation failure"}\n')
---snip---
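
The two kernel lines above point at the object-map feature: target_core_rbd rejects the image because emulate_legacy_capacity must be disabled for RBD_FEATURE_OBJECT_MAP images. A minimal sketch of checking a captured `rbd info` dump for the offending feature (the sample dump in the heredoc is hypothetical; on a live cluster you would pipe in `rbd info iscsi-pool/image3` instead, and whether disabling object-map is acceptable for your workload is a separate question):

```shell
# Extract the features line from a (hypothetical) captured `rbd info` dump.
# On a live cluster: rbd info iscsi-pool/image3 | awk -F': *' '/features:/ {print $2}'
features=$(awk -F': *' '/^[[:space:]]*features:/ {print $2}' <<'EOF'
rbd image 'image3':
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
EOF
)
if printf '%s\n' "$features" | grep -q 'object-map'; then
    echo "object-map enabled"
    # Possible (untested here) workaround; fast-diff depends on object-map,
    # so both would need to be disabled together:
    # rbd feature disable iscsi-pool/image3 object-map fast-diff
fi
```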

I'm wondering why on Pacific the rbd image gets mapped while on Reef it doesn't. But isn't the iSCSI gateway deprecated anyway [1]?

The iSCSI gateway is in maintenance as of November 2022. This means that it is no longer in active development and will not be updated to add new features.

Not sure if it makes sense to dig deeper here...

[1] https://docs.ceph.com/en/quincy/rbd/iscsi-overview/

Zitat von Eugen Block <eblock@xxxxxx>:

Hi,

that's quite interesting, I tried to reproduce with 18.2.0 but it worked for me. The cluster runs on openSUSE Leap 15.4. There are two things that seem to differ in my attempt.
1. I had to run 'modprobe iscsi_target_mod' to get rid of this error message:

(b'{"message":"iscsi target \'init\' process failed for iqn.2001-07.com.ceph:'
b'1692952282206 - Could not load module: iscsi_target_mod"}\n')
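
If the module has to be loaded by hand, it can also be made persistent across reboots; a sketch assuming systemd-modules-load(8) is in use (the file path and name are my choice, not something the gateway ships):

```
# /etc/modules-load.d/iscsi_target.conf  (hypothetical file name)
iscsi_target_mod
```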

2. I don't have a tcmu-runner up, which is reported every 10 seconds, but it seems to work anyway:

debug there is no tcmu-runner data available

In a Pacific test cluster I do see that a tcmu-runner is deployed alongside the iSCSI gateway. Anyway, this is my output:

---snip---
[...]
o- disks ................................................................................ [10G, Disks: 1]
| o- test-pool ........................................................................ [test-pool (10G)]
|   o- image1 ......................................................... [test-pool/image1 (Unknown, 10G)]
o- iscsi-targets ...................................................... [DiscoveryAuth: None, Targets: 1]
  o- iqn.2001-07.com.ceph:1692952282206 ....................................... [Auth: None, Gateways: 1]
    o- disks ................................................................................. [Disks: 1]
    | o- test-pool/image1 ................................................... [Owner: reef01, Lun: 0]
    o- gateways ................................................................ [Up: 1/1, Portals: 1]
    | o- reef01 ......................................................................... [<IP> (UP)]
    o- host-groups ......................................................................... [Groups : 0]
    o- hosts ............................................................ [Auth: ACL_DISABLED, Hosts: 0]
---snip---

This is my exact ceph version:

"ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 3


Zitat von Opánszki Gábor <gabor.opanszki@xxxxxxxxxxxxx>:

Hi folks,

we deployed a new Reef cluster in our lab.

All of the nodes are up and running, but we can't allocate a LUN to a target.

In the GUI we get the message "disk create/update failed on ceph-iscsigw0. LUN allocation failure".

We created the images in the GUI.

do you have any idea?

Thanks

root@ceph-mgr0:~# ceph -s
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
  cluster:
    id:     ad0aede2-4100-11ee-bc14-1c40244f5c21
    health: HEALTH_OK

  services:
    mon:         5 daemons, quorum ceph-mgr0,ceph-mgr1,ceph-osd5,ceph-osd7,ceph-osd6 (age 28h)
    mgr:         ceph-mgr0.sapbav(active, since 45h), standbys: ceph-mgr1.zwzyuc
    osd:         44 osds: 44 up (since 4h), 44 in (since 4h)
    tcmu-runner: 1 portal active (1 hosts)

  data:
    pools:   5 pools, 3074 pgs
    objects: 27 objects, 453 KiB
    usage:   30 GiB used, 101 TiB / 101 TiB avail
    pgs:     3074 active+clean

  io:
    client:   2.7 KiB/s rd, 2 op/s rd, 0 op/s wr

root@ceph-mgr0:~#

root@ceph-mgr0:~# rados lspools
.mgr
ace1
1T-r3-01
ace0
x
root@ceph-mgr0:~# rbd ls 1T-r3-01
111
aaaa
bb
pool2
teszt
root@ceph-mgr0:~# rbd ls x
x-a
root@ceph-mgr0:~#

root@ceph-mgr0:~# rbd info 1T-r3-01/111
rbd image '111':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 5f927ce161de
    block_name_prefix: rbd_data.5f927ce161de
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features:
    flags:
    create_timestamp: Thu Aug 24 17:33:37 2023
    access_timestamp: Thu Aug 24 17:33:37 2023
    modify_timestamp: Thu Aug 24 17:33:37 2023
root@ceph-mgr0:~# rbd info 1T-r3-01/aaaa
rbd image 'aaaa':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 5f926a0e299f
    block_name_prefix: rbd_data.5f926a0e299f
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features:
    flags:
    create_timestamp: Thu Aug 24 17:18:06 2023
    access_timestamp: Thu Aug 24 17:18:06 2023
    modify_timestamp: Thu Aug 24 17:18:06 2023
root@ceph-mgr0:~# rbd info x/x-a
rbd image 'x-a':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 5f922dbdf6c6
    block_name_prefix: rbd_data.5f922dbdf6c6
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features:
    flags:
    create_timestamp: Thu Aug 24 17:48:28 2023
    access_timestamp: Thu Aug 24 17:48:28 2023
    modify_timestamp: Thu Aug 24 17:48:28 2023
root@ceph-mgr0:~#
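
All three images carry the full default feature set, including object-map. Given the target_core_rbd complaint in the Pacific log earlier in the thread, it might be worth retrying with a reduced feature set. A sketch that only prints the create command rather than running it (the image name `teszt2` is made up; `--image-feature` takes a comma-separated list, and object-map plus its dependent fast-diff are left out):

```shell
# Build an iSCSI-friendly feature list: object-map and fast-diff omitted.
# Print the command instead of executing it, so no cluster is needed here.
features="layering,exclusive-lock"
echo "rbd create --size 1G --image-feature $features 1T-r3-01/teszt2"
```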

root@ceph-mgr0:~# ceph orch ls --service_type iscsi
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
NAME        PORTS   RUNNING  REFRESHED  AGE  PLACEMENT
iscsi.gw-1  ?:5000      2/2  4m ago     6m   ceph-iscsigw0;ceph-iscsigw1
root@ceph-mgr0:~#



GW:


root@ceph-iscsigw0:~# docker ps
CONTAINER ID   IMAGE                                     COMMAND                  CREATED             STATUS             PORTS     NAMES
d677a8abd2d8   quay.io/ceph/ceph                         "/usr/bin/rbd-target…"   6 seconds ago       Up 5 seconds                 ceph-ad0aede2-4100-11ee-bc14-1c40244f5c21-iscsi-gw-1-ceph-iscsigw0-fmuyhi
ead503586cdd   quay.io/ceph/ceph                         "/usr/bin/tcmu-runner"   6 seconds ago       Up 5 seconds                 ceph-ad0aede2-4100-11ee-bc14-1c40244f5c21-iscsi-gw-1-ceph-iscsigw0-fmuyhi-tcmu
3ae0014bcc41   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   About an hour ago   Up About an hour             ceph-ad0aede2-4100-11ee-bc14-1c40244f5c21-crash-ceph-iscsigw0
1a7bc044ed8a   quay.io/ceph/ceph                         "/usr/bin/ceph-expor…"   About an hour ago   Up About an hour             ceph-ad0aede2-4100-11ee-bc14-1c40244f5c21-ceph-exporter-ceph-iscsigw0
c746a4da2bbb   quay.io/prometheus/node-exporter:v1.5.0   "/bin/node_exporter …"   About an hour ago   Up About an hour             ceph-ad0aede2-4100-11ee-bc14-1c40244f5c21-node-exporter-ceph-iscsigw0
root@ceph-iscsigw0:~# docker exec -it d677a8abd2d8 /bin/bash
[root@ceph-iscsigw0 /]# gwcli ls
o- / .................................................................................................. [...]
  o- cluster .................................................................................. [Clusters: 1]
  | o- ceph .................................................................................... [HEALTH_OK]
  |   o- pools .................................................................................. [Pools: 5]
  |   | o- .mgr ........................................ [(x3), Commit: 0.00Y/33602764M (0%), Used: 3184K]
  |   | o- 1T-r3-01 ..................................... [(x3), Commit: 0.00Y/5793684M (0%), Used: 108K]
  |   | o- ace0 ........................................ [(2+1), Commit: 0.00Y/11587368M (0%), Used: 24K]
  |   | o- ace1 ........................................ [(2+1), Commit: 0.00Y/55665220M (0%), Used: 12K]
  |   | o- x ............................................ [(x3), Commit: 0.00Y/33602764M (0%), Used: 36K]
  |   o- topology ....................................................................... [OSDs: 44,MONs: 5]
  o- disks ............................................................................. [0.00Y, Disks: 0]
  o- iscsi-targets ..................................................... [DiscoveryAuth: None, Targets: 1]
    o- iqn.2001-07.com.ceph:1692892702115 ...................................... [Auth: None, Gateways: 2]
      o- disks ............................................................................... [Disks: 0]
      o- gateways ................................................................. [Up: 2/2, Portals: 2]
      | o- ceph-iscsigw0 ............................................... [10.202.5.21,10.202.4.21 (UP)]
      | o- ceph-iscsigw1 ............................................... [10.202.3.21,10.202.2.21 (UP)]
      o- host-groups ....................................................................... [Groups : 0]
      o- hosts ............................................................ [Auth: ACL_ENABLED, Hosts: 0]
[root@ceph-iscsigw0 /]#

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

