Re: ceph iscsi questions

On Tue, Jun 18, 2013 at 11:13:15AM +0200, Leen Besselink wrote:
> On Tue, Jun 18, 2013 at 09:52:53AM +0200, Kurt Bauer wrote:
> > Hi,
> > 
> > 
> > Da Chun schrieb:
> > > Hi List,
> > >
> > > I want to deploy a ceph cluster with the latest cuttlefish, and export it
> > > over iSCSI to my applications.
> > > Some questions here:
> > > 1. Which Linux distro and release would you recommend? I used Ubuntu
> > > 13.04 for testing purposes before.
> > For the ceph-cluster or the "iSCSI-GW"? We use Ubuntu 12.04 LTS for the
> > cluster and the iSCSI-GW, but tested Debian wheezy as iSCSI-GW too. Both
> > work flawlessly.
> > > 2. Which iscsi target is better? LIO, SCST, or others?
> > Have you read http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/
> > ? That's what we do and it works without problems so far.
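
For anyone who hasn't set this up, the stgt side from that post boils down to
roughly the following (the tid, IQN and pool/image names here are made up,
check the post for the exact flags):

  # create the target (tid and IQN are examples)
  tgtadm --lld iscsi --mode target --op new --tid 1 \
      --targetname iqn.2013-06.com.example:rbd-test
  # attach an rbd image as LUN 1 via the rbd backing-store type
  tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
      --bstype rbd --backing-store mypool/myimage
  # allow initiators to connect (open to all here, restrict it in production)
  tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL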
> > 
> > > 3. The system for the iscsi target will be a single point of failure.
> > > How to eliminate it and make good use of ceph's nature of distribution?
> > That's a question we asked ourselves too. In theory one can set up 2
> > iSCSI-GWs and use multipath, but what does that do to the cluster? Will
> > something break if 2 iSCSI targets use the same rbd image in the cluster?
> > Even if I use failover-mode only?
> > 
> > Has someone already tried this and is willing to share their knowledge?
> > 
> 
> Let's see.
> 
> You mentioned HA and multipath.
> 
> You don't really need multipath for an HA iSCSI-target.
> 
> Multipath allows you to use multiple paths, multiple connections/networks/switches,
> but you don't want to connect an iSCSI-initiator to multiple iSCSI-targets (for the
> same LUN). That is asking for trouble.
> 

I should probably add why you might still want to use multipath: it does add
resilience, and also performance if one connection to the target is not enough.

I have a feeling that when using multipath it is easiest to use multiple subnets.
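
For example, with open-iscsi you would log in to the same target through a
portal on each subnet and let dm-multipath tie the resulting block devices
together (the addresses and IQN are made up):

  # discover and log in via both portals, one per subnet
  iscsiadm -m discovery -t sendtargets -p 10.0.1.10
  iscsiadm -m discovery -t sendtargets -p 10.0.2.10
  iscsiadm -m node -T iqn.2013-06.com.example:rbd-test -p 10.0.1.10 --login
  iscsiadm -m node -T iqn.2013-06.com.example:rbd-test -p 10.0.2.10 --login
  # the two block devices should then show up as one multipath map
  multipath -ll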

> So multipath just gives you extra paths.
> 
> When you have multiple iSCSI-targets, you use failover.
> 
> Most iSCSI-initiators can deal with at least 30 seconds of no responses from
> the iSCSI-target. No response means no response. An error response is the wrong
> response, of course.
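
On Linux that window is controlled by the open-iscsi timeouts in
/etc/iscsi/iscsid.conf; these are the knobs I mean (the values are only
examples, the shipped defaults may differ):

  # how long the initiator queues I/O waiting for the session to come back
  node.session.timeo.replacement_timeout = 30
  # how often a NOP-Out ping is sent, and how long to wait for the reply
  node.conn[0].timeo.noop_out_interval = 5
  node.conn[0].timeo.noop_out_timeout = 5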
> 
> So when using failover, a virtual IP-address is probably what you want.
> 
> Probably combined with something like Pacemaker to make sure multiple machines
> do not claim to have the same IP-address.
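
With Pacemaker the floating IP is a single IPaddr2 resource, something like
this minimal sketch (the address is made up):

  # virtual IP that follows the active iSCSI gateway
  crm configure primitive p_iscsi_vip ocf:heartbeat:IPaddr2 \
      params ip=10.0.1.100 cidr_netmask=24 \
      op monitor interval=10s

You would normally group or colocate this with the iSCSI-target resource so
they fail over together.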
> 
> You'll need even more if you have multiple iSCSI-initiators that want to connect
> to the same rbd, like some Windows or VMware cluster. And I guess Linux cluster
> filesystems like OCFS2 probably need it too.
> 
> It's called SPC-3 Persistent Reservation.
> 
> As I understand Persistent Reservation, the iSCSI-target just needs to keep state
> for the connected initiators. Losing that state on failover isn't a problem, so
> there is no state that needs to be replicated between multiple gateways, as long
> as all initiators are connected to the same target. When different initiators are
> connected to different targets, your data will get corrupted on write.
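
If you want to check what reservation state a target is holding, sg_persist
from sg3-utils shows it from the initiator side (the device name is made up):

  # list the registered keys and the current reservation, if any
  sg_persist --in --read-keys /dev/sdb
  sg_persist --in --read-reservation /dev/sdb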
> 
> Now implementations:
> 
> - stgt does have some support for SPC-3, but not enough.
> - LIO supports SPC-3 Persist; it is the one in current Linux kernels.
> - SCST seemed too much of a pain to set up to even try, but I might be wrong.
> - IET (iSCSI Enterprise Target) seems to support SPC-3 Persist; it's a DKMS package on Ubuntu.
> - istgt (http://www.peach.ne.jp/archives/istgt/), which I found later, also supports SPC-3 Persist. It comes from the FreeBSD camp, and a package is available for Debian and Ubuntu on the Linux kernel, not just kFreeBSD. I haven't tried it.
> 
> So I haven't tried them all yet. I have used LIO.
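
For what it's worth, with LIO I mapped the rbd image on the gateway and
exported the resulting block device, roughly like this (pool/image/IQN are
made up, and targetcli syntax differs a bit between versions):

  # map the rbd image so it appears as /dev/rbd0 on the gateway
  rbd map mypool/myimage
  # export it through LIO: a block backstore plus an iSCSI target
  targetcli /backstores/iblock create rbd0 /dev/rbd0
  targetcli /iscsi create iqn.2013-06.com.example:rbd-test
  targetcli /iscsi/iqn.2013-06.com.example:rbd-test/tpg1/luns create /backstores/iblock/rbd0
  # a portal and initiator ACLs are still needed, omitted here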
> 
> Another small tip: if you don't understand iSCSI, you might end up configuring it
> the wrong way at first and it will be slow. You might need to spend time figuring
> out how to tune it.
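
To give an idea of what tuning means here, these are the kind of open-iscsi
session settings people end up adjusting in /etc/iscsi/iscsid.conf (example
values, not recommendations):

  # commands outstanding per session / queue depth per LUN
  node.session.cmds_max = 1024
  node.session.queue_depth = 128
  # larger data segments can help sequential throughput
  node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144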
> 
> Now you know what I know.
> 
> > Best regards,
> > Kurt
> > 
> > >
> > > Thanks!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



