Re: Ceph iSCSI Questions


 



On Fri, Sep 4, 2020 at 11:54 AM <DHilsbos@xxxxxxxxxxxxxx> wrote:
>
> All;
>
> We've used iSCSI to support virtualization for a while, and have used multi-pathing almost the entire time.  Now, I'm looking to move from our single box iSCSI hosts to iSCSI on Ceph.
>
> We have 2 independent, non-routed subnets assigned to iSCSI (let's call them 192.168.250.0/24 and 192.168.251.0/24).  These subnets are hosted in VLANs 250 and 251, respectively, on our switches.  Currently, each target and each initiator has a dedicated network port for each subnet (i.e. 2 NICs per target & 2 NICs per initiator).
>
> I have 2 servers prepared to set up as Ceph iSCSI targets (let's call them ceph-iscsi1 & ceph-iscsi2), and I'm wondering about their network configurations.  My initial plan is to configure one on the 250 network, and the other on the 251 network.
>
> Would it be possible to have both servers on both networks?  In other words, can I give ceph-iscsi1 both 192.168.250.200 and 192.168.251.200, and ceph-iscsi2 192.168.250.201 and 192.168.251.201?

When defining the gateways via gwcli or the dashboard, you should be
able to specify a comma-separated list of portal IP addresses.
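Roughly like this, for example (a sketch using your hostnames and subnets; the target IQN is just a placeholder for whichever target you actually create):

$ # the IQN below is a placeholder; substitute the IQN of your own target
$ gwcli
/> cd /iscsi-targets/iqn.2020-09.com.example:vmware/gateways
/iscsi-target...ware/gateways> create ceph-iscsi1 192.168.250.200,192.168.251.200
/iscsi-target...ware/gateways> create ceph-iscsi2 192.168.250.201,192.168.251.201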

> If that works, I would expect the initiators to see 4 paths to each portal, correct?

Correct:

$ gwcli
/iscsi-target...ample:target1> ls
o- iqn.2020-01.com.example:target1 ...................................... [Auth: None, Gateways: 2]
  o- disks ............................................................................. [Disks: 1]
  | o- rbd/image1 ..................................................... [Owner: ceph-iscsi0, Lun: 0]
  o- gateways .................................................................. [Up: 2/2, Portals: 2]
  | o- ceph-iscsi0 ............................................ [192.168.42.30,192.168.121.192 (UP)]
  | o- ceph-iscsi1 ............................................. [192.168.121.23,192.168.42.31 (UP)]
  o- host-groups ...................................................................... [Groups : 0]
  o- hosts ................................................... [Auth: ACL_ENABLED, Hosts: 1]
    o- iqn.2020-01.com.example:client1 .................................. [Auth: CHAP, Disks: 1(1G)]
      o- lun 0 ................................................ [rbd/image1(1G), Owner: ceph-iscsi0]

$ iscsiadm -m discovery -t st -p 192.168.42.30
192.168.42.30:3260,1 iqn.2020-01.com.example:target1
192.168.121.192:3260,1 iqn.2020-01.com.example:target1
192.168.121.23:3260,2 iqn.2020-01.com.example:target1
192.168.42.31:3260,2 iqn.2020-01.com.example:target1

$ iscsiadm -m node -T iqn.2020-01.com.example:target1 -l
Logging in to [iface: default, target: iqn.2020-01.com.example:target1, portal: 192.168.42.30,3260]
Logging in to [iface: default, target: iqn.2020-01.com.example:target1, portal: 192.168.121.192,3260]
Logging in to [iface: default, target: iqn.2020-01.com.example:target1, portal: 192.168.121.23,3260]
Logging in to [iface: default, target: iqn.2020-01.com.example:target1, portal: 192.168.42.31,3260]
Login to [iface: default, target: iqn.2020-01.com.example:target1, portal: 192.168.42.30,3260] successful.
Login to [iface: default, target: iqn.2020-01.com.example:target1, portal: 192.168.121.192,3260] successful.
Login to [iface: default, target: iqn.2020-01.com.example:target1, portal: 192.168.121.23,3260] successful.
Login to [iface: default, target: iqn.2020-01.com.example:target1, portal: 192.168.42.31,3260] successful.

$ multipath -ll
Sep 04 17:37:02 | device config in /etc/multipath.conf missing vendor or product parameter
mpatha (36001405974841ad4e2746f5bdd96c743) dm-0 LIO-ORG,TCMU device
size=1.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 2:0:0:0 sdc 8:32 active ready running
|-+- policy='service-time 0' prio=50 status=enabled
| `- 3:0:0:0 sda 8:0  active ready running
|-+- policy='service-time 0' prio=10 status=enabled
| `- 5:0:0:0 sdd 8:48 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  `- 4:0:0:0 sdb 8:16 active ready running
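
As an aside, the "missing vendor or product parameter" warning at the top of that output just means my /etc/multipath.conf had no device section for the LIO-ORG/TCMU devices the gateways export. Something along these lines (a sketch based on the upstream Ceph iSCSI initiator documentation; please verify the settings against the docs for your release) takes care of it:

devices {
        device {
                # sketch for ceph-iscsi (LIO-ORG / TCMU) targets; verify against
                # the Ceph iSCSI initiator docs for your release
                vendor                 "LIO-ORG"
                product                "TCMU device"
                hardware_handler       "1 alua"
                path_grouping_policy   "failover"
                path_selector          "queue-length 0"
                failback               60
                path_checker           tur
                prio                   alua
                prio_args              exclusive_pref_bit
                fast_io_fail_tmo       25
                no_path_retry          queue
        }
}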


> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director - Information Technology
> Perform Air International Inc.
> DHilsbos@xxxxxxxxxxxxxx
> www.PerformAir.com
>
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>


-- 
Jason
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


