Re: slow using ISCSI - Help-me


 



In addition: I turned off one of the GWs, and with just one it works fine. When both are up, one of the images keeps flipping its "active/optimized" state all the time (which generates the logs above) and everything is extremely slow.

I'm using:
tcmu-runner-1.4
ceph-iscsi-3.3
ceph 13.2.7

Regards,
Gesiel

On Tue, Dec 24, 2019 at 09:09, Gesiel Galvão Bernardes <gesiel.bernardes@xxxxxxxxx> wrote:
Hi,

I am seeing an unusual slowdown using VMware with iSCSI gateways. I have two iSCSI gateways and two RBD images. I found the following in the logs:

Dec 24 09:00:26 ceph-iscsi2 tcmu-runner: 2019-12-24 09:00:26.040 969 [INFO] alua_implicit_transition:562 rbd/pool1.vmware_iscsi1: Starting lock acquisition operation.
Dec 24 09:00:26 ceph-iscsi2 tcmu-runner: 2019-12-24 09:00:26.040 969 [INFO] alua_implicit_transition:557 rbd/pool1.vmware_iscsi1: Lock acquisition operation is already in process.
Dec 24 09:00:26 ceph-iscsi2 tcmu-runner: 2019-12-24 09:00:26.973 969 [WARN] tcmu_rbd_lock:744 rbd/pool1.vmware_iscsi1: Acquired exclusive lock.
Dec 24 09:00:26 ceph-iscsi2 tcmu-runner: tcmu_rbd_lock:744 rbd/pool1.vmware_iscsi1: Acquired exclusive lock.
Dec 24 09:00:28 ceph-iscsi2 tcmu-runner: 2019-12-24 09:00:28.099 969 [WARN] tcmu_notify_lock_lost:201 rbd/pool1.vmware_iscsi1: Async lock drop. Old state 1
Dec 24 09:00:28 ceph-iscsi2 tcmu-runner: tcmu_notify_lock_lost:201 rbd/pool1.vmware_iscsi1: Async lock drop. Old state 1
Dec 24 09:00:28 ceph-iscsi2 tcmu-runner: alua_implicit_transition:562 rbd/pool1.vmware_iscsi1: Starting lock acquisition operation.
Dec 24 09:00:28 ceph-iscsi2 tcmu-runner: 2019-12-24 09:00:28.824 969 [INFO] alua_implicit_transition:562 rbd/pool1.vmware_iscsi1: Starting lock acquisition operation.
Dec 24 09:00:28 ceph-iscsi2 tcmu-runner: 2019-12-24 09:00:28.990 969 [WARN] tcmu_rbd_lock:744 rbd/pool1.vmware_iscsi1: Acquired exclusive lock.
Dec 24 09:00:28 ceph-iscsi2 tcmu-runner: tcmu_rbd_lock:744 rbd/pool1.vmware_iscsi1: Acquired exclusive lock.
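
The logs suggest the RBD exclusive lock is bouncing between the two gateways. In case it helps diagnose this, lock ownership can be observed from one of the gateway nodes (the pool/image name below is taken from the logs above; adjust as needed):

```shell
# Show which client currently holds the exclusive lock on the image.
# Running this repeatedly should reveal whether the lock owner keeps changing.
rbd lock list pool1/vmware_iscsi1

# Show the watchers on the image (normally one per gateway that has it mapped).
rbd status pool1/vmware_iscsi1
```

On the ESXi side, `esxcli storage nmp device list` shows the path selection policy in use for each device; a policy that round-robins I/O across both gateways could plausibly cause this kind of lock ping-pong, since each gateway must steal the exclusive lock before it can serve writes.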


Can anyone help me, please?

Gesiel


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


