Re: slow using ISCSI - Help-me

On 01/20/2020 10:29 AM, Gesiel Galvão Bernardes wrote:
> Hi,
> 
> I have only now been able to get back to this problem. My environment is
> relatively simple: I have two ESXi 6.7 hosts connected to two iSCSI
> gateways, using two RBD images.
> 
> When this thread started, the workaround was to keep only one iSCSI
> gateway connected; that way it works normally. After the reply I received
> here saying the problem could be in the VMware configuration, I reviewed
> the configuration on both hosts (they are exactly as described in the Ceph
> documentation) and rebooted both.
> 

Can you give me some basic info?

The output of:
# "gwcli ls"

from one of the iSCSI nodes, and give me the output of:

# targetcli ls

from both iSCSI nodes.
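
If it is easier, you can capture both to files and attach them, e.g.
something like this (the file names are just examples):

# gwcli ls > /tmp/gwcli-ls.txt 2>&1
# targetcli ls > /tmp/targetcli-ls.txt 2>&1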

Also, which ceph-iscsi and tcmu-runner versions are you running, and did you
build them yourself, get them from the Ceph repos, or get them from a distro
repo?
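
If they came from packages, something like this (assuming an RPM-based
distro) will show the exact installed versions:

# rpm -q ceph-iscsi tcmu-runner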


> It turns out that the gateway "ceph-iscsi1" is now working. When I turn on
> "ceph-iscsi2" it shows as "Active / Not optimized" for both RBD images
> (before there was one "Active / Optimized" path per image), and if I turn
> off "ceph-iscsi1", then "ceph-iscsi2" stays "Active / Not optimized" and
> the images become unavailable.

On the ESXi side, can you give me the output of:

esxcli storage nmp path list -d disk_id
esxcli storage core device list -d disk_id
esxcli storage nmp device list -d disk_id
esxcli storage nmp satp list

and the /var/log/vmkernel.log for when you stop a node and the image
goes to the unavailable state.
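
In the commands above, disk_id is the identifier (usually the naa.* name) of
the RBD-backed LUN. If you are not sure which device that is, a plain

# esxcli storage core device list

without -d lists every device, so you can pick out the ceph-iscsi/LIO ones.
For vmkernel.log the whole file is fine, but if it is large, something like

# grep -iE 'alua|iscsi|naa' /var/log/vmkernel.log

from around the time you stop the node is usually enough (that grep pattern
is just a suggestion).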



> 
> Nothing is recorded in the logs. Can you help me?
> 
> On Thu, Dec 26, 2019 at 17:44, Mike Christie <mchristi@xxxxxxxxxx> wrote:
> 
>     On 12/24/2019 06:40 AM, Gesiel Galvão Bernardes wrote:
>     > In addition: I turned off one of the GWs, and with just one it works
>     > fine. When both are up, one of the images keeps switching its "Active /
>     > Optimized" path all the time (which generates the logs above) and
>     > everything is extremely slow.
> 
>     Your multipathing in ESX is probably misconfigured and you have set it
>     up for active/active, or one host can't see all the iSCSI paths, either
>     because it's not logged into all the sessions or because the network is
>     not up on one of the paths.
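
As a follow-up to the above: the quickest thing to check on each ESXi host
is the path selection policy on the RBD LUNs. For an ALUA setup like this it
should normally be Most Recently Used, not Round Robin. Something along
these lines (naa.xxx is just a placeholder for your disk_id) will show and,
if needed, change it:

# esxcli storage nmp device list -d naa.xxx
# esxcli storage nmp device set -d naa.xxx -P VMW_PSP_MRU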
> 
> 
>     >
>     > I'm using:
>     > tcmu-runner-1.4
>     > ceph-iscsi-3.3
>     > ceph 13.2.7
>     >
>     > Regards,
>     > Gesiel
>     >
>     > On Tue, Dec 24, 2019 at 09:09, Gesiel Galvão Bernardes
>     > <gesiel.bernardes@xxxxxxxxx> wrote:
>     >
>     >     Hi,
>     >
>     >     I am having an unusual slowdown using VMware with iSCSI gateways.
>     >     I have two iSCSI gateways with two RBD images. I have found the
>     >     following in the logs:
>     >
>     >     Dec 24 09:00:26 ceph-iscsi2 tcmu-runner: 2019-12-24 09:00:26.040 969 [INFO] alua_implicit_transition:562 rbd/pool1.vmware_iscsi1: Starting lock acquisition operation.
>     >     2019-12-24 09:00:26.040 969 [INFO] alua_implicit_transition:557 rbd/pool1.vmware_iscsi1: Lock acquisition operation is already in process.
>     >     2019-12-24 09:00:26.973 969 [WARN] tcmu_rbd_lock:744 rbd/pool1.vmware_iscsi1: Acquired exclusive lock.
>     >     Dec 24 09:00:26 ceph-iscsi2 tcmu-runner: tcmu_rbd_lock:744 rbd/pool1.vmware_iscsi1: Acquired exclusive lock.
>     >     Dec 24 09:00:28 ceph-iscsi2 tcmu-runner: 2019-12-24 09:00:28.099 969 [WARN] tcmu_notify_lock_lost:201 rbd/pool1.vmware_iscsi1: Async lock drop. Old state 1
>     >     Dec 24 09:00:28 ceph-iscsi2 tcmu-runner: tcmu_notify_lock_lost:201 rbd/pool1.vmware_iscsi1: Async lock drop. Old state 1
>     >     Dec 24 09:00:28 ceph-iscsi2 tcmu-runner: alua_implicit_transition:562 rbd/pool1.vmware_iscsi1: Starting lock acquisition operation.
>     >     Dec 24 09:00:28 ceph-iscsi2 tcmu-runner: 2019-12-24 09:00:28.824 969 [INFO] alua_implicit_transition:562 rbd/pool1.vmware_iscsi1: Starting lock acquisition operation.
>     >     2019-12-24 09:00:28.990 969 [WARN] tcmu_rbd_lock:744 rbd/pool1.vmware_iscsi1: Acquired exclusive lock.
>     >     Dec 24 09:00:28 ceph-iscsi2 tcmu-runner: tcmu_rbd_lock:744 rbd/pool1.vmware_iscsi1: Acquired exclusive lock.
>     >
>     >
>     >     Can anyone help me please?
>     >
>     >     Gesiel
>     >
>     >
>     >
>     >
> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



