Re: iSCSI to a Ceph node with 2 network adapters - how to ?

It is worth asking - why do you want to have two interfaces?
If you have 1Gbps interfaces and this is a bandwidth requirement, then 10Gbps cards and switches are very cheap these days.

On 1 June 2018 at 10:37, Panayiotis Gotsis <pgotsis@xxxxxxxxxxxx> wrote:
Hello

Bonding and iSCSI are not a best-practice architecture; multipath is.
However, I can attest to problems with multipathd on Debian.

In any case, what you should try to do and check is the following
(a rough sketch of these checks appears after the list):

1) Use two VLANs, one for each Ethernet port, with different IP
address spaces. Your initiators on the hosts will then be able to
discover two iSCSI targets.
2) Ensure that ping works between the host interfaces and the iSCSI
targets, and that the iSCSI target daemon is up (check with netstat,
for example) on each of the two IP addresses/Ethernet interfaces.
3) Check the multipath configuration.
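
(A rough sketch of those checks, assuming the 192.168.200.230/.231
addresses from the original post and the default iSCSI port 3260:)

    # 1) reachability of each portal from an initiator host
    ping -c 3 192.168.200.230
    ping -c 3 192.168.200.231

    # 2) is the target daemon listening on both addresses?
    ss -ltn | grep 3260        # or: netstat -ltn | grep 3260

    # 3) after logging in to both portals, inspect the multipath state
    multipath -ll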


On 18-06-01 05:08 +0200, Marc Roos wrote:


Indeed, you have to add routes and rules to the routing table. Just bond
them.
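
(A minimal sketch of the bonding approach on Ubuntu 18.04 with netplan;
the interface names enp1s0/enp2s0 and the file name are illustrative,
and 802.3ad/LACP needs matching switch configuration:)

    # /etc/netplan/01-bond0.yaml
    network:
      version: 2
      ethernets:
        enp1s0: {}
        enp2s0: {}
      bonds:
        bond0:
          interfaces: [enp1s0, enp2s0]
          parameters:
            mode: 802.3ad    # or active-backup if the switch cannot do LACP
          addresses: [192.168.200.230/24]

    # then apply it with: sudo netplan apply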


-----Original Message-----
From: John Hearns [mailto:hearnsj@xxxxxxxxxxxxxx]
Sent: Friday, 1 June 2018 10:00
To: ceph-users
Subject: Re: iSCSI to a Ceph node with 2 network adapters - how to ?

Errr... is this very wise?

I have both of its Ethernet ports connected to the same LAN,
       with different IPs in the same subnet
       (like 192.168.200.230/24 and 192.168.200.231/24)


In my experience, setting up two interfaces on the same subnet means that
your system doesn't know which one to route traffic through...
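
(If one did want two interfaces in the same subnet anyway, source-based
policy routing is the usual workaround; a sketch with iproute2, using
the addresses from the original post and illustrative interface names
enp1s0/enp2s0:)

    # one routing table per interface, so replies leave through the
    # interface that owns the source address
    ip route add 192.168.200.0/24 dev enp1s0 src 192.168.200.230 table 100
    ip rule add from 192.168.200.230 table 100
    ip route add 192.168.200.0/24 dev enp2s0 src 192.168.200.231 table 101
    ip rule add from 192.168.200.231 table 101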

On 1 June 2018 at 09:01, Wladimir Mutel <mwg@xxxxxxxxx> wrote:


Dear all,

I am experimenting with a Ceph setup. I set up a single node
(Asus P10S-M WS, Xeon E3-1235 v5, 64 GB RAM, 8x3TB SATA HDDs,
Ubuntu 18.04 Bionic, Ceph packages from
http://download.ceph.com/debian-luminous/dists/xenial/
and the iSCSI parts built manually per
http://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/ ).
Also I changed 'chooseleaf ... host' into 'chooseleaf ... osd'
in the CRUSH map to run with a single host.
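
(For reference, a sketch of that CRUSH edit using the usual crushtool
round trip; the file names are illustrative:)

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # in crushmap.txt, in the replicated rule, change
    #   step chooseleaf firstn 0 type host
    # to
    #   step chooseleaf firstn 0 type osd
    crushtool -c crushmap.txt -o crushmap.new.bin
    ceph osd setcrushmap -i crushmap.new.bin
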
       
                I have both its Ethernets connected to the same LAN,
                with different IPs in the same subnet
                (like, 192.168.200.230/24 and 192.168.200.231/24)
                mon_host in ceph.conf is set to 192.168.200.230,
                and ceph daemons (mgr, mon, osd) are listening to this IP.
       
What I would finally like to achieve is to provide multipath
iSCSI access through both of these Ethernet ports to Ceph RBDs,
but apparently gwcli does not allow me to add a second gateway
to the same target. It goes like this:
       
    /iscsi-target> create iqn.2018-06.host.test:test
    ok
    /iscsi-target> cd iqn.2018-06.host.test:test/gateways
    /iscsi-target...test/gateways> create p10s 192.168.200.230 skipchecks=true
    OS version/package checks have been bypassed
    Adding gateway, sync'ing 0 disk(s) and 0 client(s)
    ok
    /iscsi-target...test/gateways> create p10s2 192.168.200.231 skipchecks=true
    OS version/package checks have been bypassed
    Adding gateway, sync'ing 0 disk(s) and 0 client(s)
    Failed : Gateway creation failed, gateway(s) unavailable:192.168.200.231(UNKNOWN state)
       
Host names are defined in /etc/hosts as follows:

    192.168.200.230 p10s
    192.168.200.231 p10s2
       
So I suppose that something does not listen on 192.168.200.231,
but I have no idea what that thing is, how to make it listen there,
or how to achieve this goal (using both Ethernet ports for iSCSI)
in a different way. Should I aggregate the Ethernet ports into a
'bond' interface with a single IP? Should I build and use the 'lrbd'
tool instead of 'gwcli'? Is it acceptable that I run kernel 4.15,
not 4.16+? What other directions could you give me on this task?
Thanks in advance for your replies.
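
(One thing worth checking, assuming the ceph-iscsi rbd-target-api
service is what gwcli contacts when adding a gateway, and assuming
its default API port of 5000: is the API up on the second address,
and does /etc/ceph/iscsi-gateway.cfg list both gateway IPs in
trusted_ip_list? As a sketch:)

    systemctl status rbd-target-api
    ss -ltn | grep 5000        # which address is the API bound to?
    grep trusted_ip_list /etc/ceph/iscsi-gateway.cfg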




--
Panayiotis Gotsis
Systems & Services Engineer
Network Operations Center
GRNET - Networking Research and Education
7, Kifisias Av., 115 23, Athens
t: +30 210 7471091 | f: +30 210 7474490

Follow us: www.grnet.gr
Twitter: @grnet_gr |Facebook: @grnet.gr
LinkedIn: grnet |YouTube: GRNET EDET


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
