Re: iSCSI to a Ceph node with 2 network adapters - how to ?

On 06/01/2018 02:01 AM, Wladimir Mutel wrote:
>     Dear all,
> 
>     I am experimenting with a Ceph setup. I set up a single node
>     (Asus P10S-M WS, Xeon E3-1235 v5, 64 GB RAM, 8x3TB SATA HDDs,
>     Ubuntu 18.04 Bionic, Ceph packages from
>     http://download.ceph.com/debian-luminous/dists/xenial/
>     and iscsi parts built manually per
> http://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/)
>     Also I changed 'chooseleaf ... host' to 'chooseleaf ... osd'
>     in the CRUSH map to run with a single host.
> 
>     I have both its Ethernets connected to the same LAN,
>     with different IPs in the same subnet
>     (e.g. 192.168.200.230/24 and 192.168.200.231/24).
>     mon_host in ceph.conf is set to 192.168.200.230,
>     and the ceph daemons (mgr, mon, osd) are listening on this IP.
> 
>     What I would finally like to achieve is multipath iSCSI access
>     to Ceph RBDs through both of these Ethernets, but apparently
>     gwcli does not allow me to add a second gateway to the same
>     target. It goes like this:
> 
> /iscsi-target> create iqn.2018-06.host.test:test
> ok
> /iscsi-target> cd iqn.2018-06.host.test:test/gateways
> /iscsi-target...test/gateways> create p10s 192.168.200.230 skipchecks=true
> OS version/package checks have been bypassed
> Adding gateway, sync'ing 0 disk(s) and 0 client(s)
> ok
> /iscsi-target...test/gateways> create p10s2 192.168.200.231 skipchecks=true
> OS version/package checks have been bypassed
> Adding gateway, sync'ing 0 disk(s) and 0 client(s)
> Failed : Gateway creation failed, gateway(s)
> unavailable:192.168.200.231(UNKNOWN state)
> 
>     Host names are defined in /etc/hosts as follows:
> 
> 192.168.200.230 p10s
> 192.168.200.231 p10s2
> 
>     so I suppose that something is not listening on 192.168.200.231,
> but I have no idea what that thing is or how to make it listen there.
> Or how could I achieve this goal (using both Ethernets for iSCSI) in a
> different way? Should I aggregate the Ethernets into a 'bond' interface with

There are multiple issues here:

1. LIO does not really support multiple IPs on the same subnet on the
same system out of the box. Normal network routing kicks in, so if the
initiator sends something to .230, the target may respond from .231,
and for operations like logins things will not go as planned in the
iSCSI target layer, because the code that manages connections gets
thrown off. On the initiator side it works when using ifaces, because
we use SO_BINDTODEVICE to tell the net layer to use a specific netdev,
but there is no code like that in the target. So on the target side it
just depends on the routing table setup, and you have to modify that
yourself. I think there might be a bug here though.
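
If you want to experiment with it anyway, source-based policy routing
is the usual way to pin replies from each IP to its own netdev.
Something like this (the interface names enp2s0/enp3s0 are just
examples, substitute your own):

# route traffic sourced from .231 through its own table/netdev
ip route add 192.168.200.0/24 dev enp3s0 src 192.168.200.231 table 100
ip rule add from 192.168.200.231 table 100
# and the same for .230
ip route add 192.168.200.0/24 dev enp2s0 src 192.168.200.230 table 101
ip rule add from 192.168.200.230 table 101

I have not verified this works around the target issue above, so treat
it as a sketch.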

In general I think using a different subnet per port is easiest and
best for most cases.
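
For example (addresses and interface names again just illustrative):

ip addr add 192.168.200.230/24 dev enp2s0
ip addr add 192.168.201.230/24 dev enp3s0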

2. Ceph-iscsi does not support multiple IPs on the same gateway right
now, because you can hit the case where a WRITE is sent down path 1 and
gets stuck, then the initiator fails over to path 2 and sends the STPG
(SCSI Set Target Port Groups) command there. That STPG goes down a
different path, so the stuck WRITE on path 1 is not flushed like we
need. Because both paths go through the same rbd client, the rbd
locking/blacklisting does not kick in the way it does when the paths
are on different gateways.
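
For context, that failover flow assumes the initiator's multipath layer
is running in explicit ALUA failover mode, along the lines of what the
ceph-iscsi docs suggest for multipath.conf (I am writing this from
memory, so verify the exact values against the docs):

devices {
        device {
                vendor                 "LIO-ORG"
                hardware_handler       "1 alua"
                path_grouping_policy   "failover"
                path_selector          "queue-length 0"
                prio                   alua
                path_checker           tur
                failback               60
        }
}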

So for both issues you could just use network-level bonding.
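
On Ubuntu 18.04 that would be a netplan bond. A minimal sketch, assuming
interface names enp2s0/enp3s0 and active-backup mode (802.3ad would need
switch support):

# /etc/netplan/01-bond.yaml
network:
  version: 2
  ethernets:
    enp2s0: {dhcp4: no}
    enp3s0: {dhcp4: no}
  bonds:
    bond0:
      interfaces: [enp2s0, enp3s0]
      addresses: [192.168.200.230/24]
      parameters:
        mode: active-backup

Then run 'netplan apply' and point both the ceph daemons and the iscsi
gateway at the bond's single IP.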

> single IP? Should I build and use the 'lrbd' tool instead of 'gwcli'? Is

Or you can use lrbd, but for that make sure you are running the SUSE
kernel, since it has the special timeout code.

> it acceptable that I run kernel 4.15, not 4.16+?
> What other directions could you give me on this task?
> Thanks in advance for your replies.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


