RE: Vsftpd & Iscsi - fast enough ++




> -----Original Message-----
> From: centos-bounces@xxxxxxxxxx 
> [mailto:centos-bounces@xxxxxxxxxx] On Behalf Of Karl R. Balsmeier
> Sent: Tuesday, May 22, 2007 11:01 PM
> To: CentOS mailing list
> Subject: Re:  Vsftpd & Iscsi - fast enough ++
> 
> Ross S. W. Walker wrote:
> >> -----Original Message-----
> >> From: centos-bounces@xxxxxxxxxx 
> >> [mailto:centos-bounces@xxxxxxxxxx] On Behalf Of chrism@xxxxxxxxx
> >> Sent: Tuesday, May 22, 2007 7:07 PM
> >> To: CentOS mailing list
> >> Subject: Re:  Vsftpd & Iscsi - fast enough
> >>
> >> Ross S. W. Walker wrote:
> >>     
> >>> Well there goes the neighborhood... Now I have to talk memory in
> >>> MiB and GiB and comm and storage in MB and GB.
> >>>
> >>> Anyways datacom and storage has always been base 10; why, well,
> >>> I'll leave that to the conspiracy theorists.
> >>>   
> >>>       
> >> Perhaps it would be possible to leave the semantics games alone
> >> and just answer the guy's question?  I don't have any personal
> >> experience with iSCSI or I would try to do that.
> >>     
> >
> > The question was answered earlier and what does your comment
> > contribute?
> >
> > Jeez, there is nothing like a me-too troll to suck the fun out of a
> > thread.
> >
> > -Ross
> >   
> well, he (Chris) does have a point.  I was excited to see my question
> get so many responses, but I only got one enterprise-relevant
> answer, from Matt Shields (thanks Matt!).  We definitely got
> side-tracked here, har.  Let us chase down this iSCSI vs. rsync
> pros/cons question a little more...
>
> I have sort of ruled out doing GFS over iSCSI because of all the
> moving parts and some inherent dangers similar to what was mentioned
> about stopping data flows on such designs.  Am I that much better
> off to ditch even the iSCSI component and just stick with the
> current setup that relies on rsync?

Let the list know what your current setup is.

If you are looking to have a single back-end storage system shared
with multiple front-end FTP servers, then you have a number of
choices:

- Use a cluster filesystem like GFS/OCFS etc. with a shared storage
system, whether shared SCSI, Fibre Channel, or iSCSI.

- Utilize a network file system exported from a shared storage server,
either NFS, CIFS, or some other network file system protocol (a
minimal NFS sketch follows below).
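
For the second option, a minimal sketch, assuming a storage server at
10.0.0.10 exporting /srv/ftp to the front-ends (all names and
addresses here are placeholders):

# On the storage server, /etc/exports:
/srv/ftp  10.0.0.0/24(rw,sync,no_root_squash)

# then reload the export table and start the NFS services:
exportfs -ra
service nfs start

# On each FTP front-end, mount the share:
mount -t nfs 10.0.0.10:/srv/ftp /var/ftp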

> I'd like to ditch rsync, and latency isn't that *big* of an issue
> because those boxes don't push more than 10-20 megs of data
> normally.  So is there a case where I could extend onto iSCSI and
> see some benefits vs. staying with the FTP server pair & rsync?  I
> sort of asked some of this in another thread, but curious about what
> folks have to say.

There are a lot of possibilities, and no single one will be perfect.
The trick is finding the one that handles the core problem you have.

> -essentially the question is 'what's after rsync, when you don't
> have Fibre Channel budget, and don't want to stoop so low as ATA
> over Ethernet (AoE)'?

There are a lot of choices even when you take Fibre Channel out of
the picture.
 
> here's the GFS over iSCSI strategy....
> 
> essentially, you're going to abstract the one filesystem, pretending
> it's "out" of any of the hosts (it can still be physically in the
> one host), and create a GFS volume on it (by creating one or more
> PVs in the clustered LVM volume group that backs GFS).
> 
> then configure the other cluster members to mount the GFS cluster
> filesystem using the GFS services over the iSCSI protocol.
> 
> that will allow all of the FTP servers to mount the shared volume
> read/write concurrently.  no more rsync.
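
As a rough sketch of that volume setup with CentOS-era tooling (the
device, volume, cluster, and mount names below are all placeholders):

# On one node: put the shared iSCSI disk under clustered LVM
pvcreate /dev/sdb                    # /dev/sdb = the iSCSI-attached disk
vgcreate -c y ftpvg /dev/sdb         # -c y marks the volume group clustered
lvcreate -l 100%FREE -n ftplv ftpvg

# Make a GFS filesystem with one journal per cluster node (3 here)
gfs_mkfs -p lock_dlm -t ftpcluster:ftpfs -j 3 /dev/ftpvg/ftplv

# On every node: mount it read/write concurrently
mount -t gfs /dev/ftpvg/ftplv /var/ftp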

True, and if you are using a shared block storage solution then you
will need to use a clustered file system, but there is another
solution too...
 
> i'd use the iSCSI hba to expose the backup volume as an iSCSI target
> and attach to it from the other set members.
> to do it right, you'll need to put up two additional hosts and
> install GFS & iSCSI services on all of them.
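
A minimal sketch of that export/attach step, assuming the iSCSI
Enterprise Target (IET) on the storage host and open-iscsi on the
cluster members (the IQN, device, and addresses are placeholders):

# Storage host, /etc/ietd.conf:
Target iqn.2007-05.com.example:ftp.backup
        Lun 0 Path=/dev/ftpvg/ftplv,Type=blockio

# Each cluster member (iscsi-initiator-utils):
iscsiadm -m discovery -t sendtargets -p 10.0.0.10
iscsiadm -m node -T iqn.2007-05.com.example:ftp.backup -p 10.0.0.10 -l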

Yes, in effect creating a traditional, active-active FTP cluster.

> you'll need to be using GbE, preferably channel-aggregated if
> possible, between the cluster members.
> read won't be as fast as direct-attach SCSI RAID, but there won't be
> any rsync/cross-copy latency.

I utilize adaptive load balancing (ALB) bonding with iSCSI, with good
results from multiple initiators. 802.3ad or traditionally aggregated
links don't do so hot, as they balance per-path instead of
per-packet.
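
For reference, a balance-alb bond on a CentOS box looks roughly like
this (interface names and addresses are assumed):

# /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=balance-alb miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=10.0.0.11
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 (likewise for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none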

> if load is not too high on a daily basis, maybe one GbE per host
> dedicated to the iSCSI/GFS sync/locking traffic, and another to
> reach the outside world

How about this idea for size:

iSCSI storage server back-end serving up volumes to a Xen server
front-end, which then provides NFS/CIFS network file system
access to multiple local paravirtualized (PV) FTP servers.

You can then add a second iSCSI server later and use DRBD 8.X
in a multiple primary setup doing active-active block-level
replication between two storage servers, which then provides
storage for two Xen server front-ends that can use something
like Heartbeat to fail over virtual machines in the event
of a Xen server failure.
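
A rough DRBD 8 resource sketch for that dual-primary replication
(hostnames, devices, and addresses are placeholders):

# /etc/drbd.conf (DRBD 8.x)
resource iscsi-store {
  protocol C;                  # synchronous replication
  net {
    allow-two-primaries;       # required for active-active
  }
  on store1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.1.1:7789;
    meta-disk internal;
  }
  on store2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.1.2:7789;
    meta-disk internal;
  }
}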

Then you can use MPIO in round-robin or fail-over mode (or
a combination) between the two back-end iSCSI servers and
the two front-end Xen servers.
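
And a device-mapper-multipath stanza for the round-robin case might
look like this (the WWID and alias are placeholders):

# /etc/multipath.conf
multipaths {
  multipath {
    wwid                  360000000000000000e00000000010001
    alias                 ftpstore
    path_grouping_policy  multibus    # all paths in one group: round-robin
    failback              immediate
  }
}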


> -----snip----------
<snip>

______________________________________________________________________
This e-mail, and any attachments thereto, is intended only for use by
the addressee(s) named herein and may contain legally privileged
and/or confidential information. If you are not the intended recipient
of this e-mail, you are hereby notified that any dissemination,
distribution or copying of this e-mail, and any attachments thereto,
is strictly prohibited. If you have received this e-mail in error,
please immediately notify the sender and permanently delete the
original and any copy or printout thereof.

