Re: iscsi and distributed volume


 



On Thu, Apr 02, 2015 at 12:08:00AM +0000, Jon Heese wrote:
> Dan,
> 
> I've read your blog post about this, but I've been unable to find a way 
> to install this "plugin" on CentOS 6 for use with tgtd.
> 
> There appears to be a "scsi-target-utils-gluster" RPM out there 
> containing a module that would accomplish this, but I can only 
> find this package for EL7-based OSes.
> 
> Do I have to build the module myself for tgtd on CentOS 6?  If so, do 
> you have instructions to do so?  Thanks.

This definitely sounds as if we should get it included in the CentOS
Storage SIG repositories.

    http://wiki.centos.org/SpecialInterestGroup/Storage

I am not sure yet how packages get added to the SIG; Lala and Humble, on
CC, should be able to explain/help with that.

For now, rebuilding your own package seems needed :-/ I would start with
the EL7 version and build that on an EL6 system with glusterfs-api-devel
installed.
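A rough sketch of that rebuild on the EL6 box (the source RPM file name
and version below are placeholders, not a verified package):

```shell
# Install the build toolchain and the gluster API headers
yum install -y rpm-build gcc make glusterfs-api-devel

# Rebuild the EL7 source RPM against the EL6 libraries
# (fetch the .src.rpm from the CentOS vault first; the name is illustrative)
rpmbuild --rebuild scsi-target-utils-1.0.46-1.el7.src.rpm

# Resulting binary RPMs end up under ~/rpmbuild/RPMS/
ls ~/rpmbuild/RPMS/x86_64/
```

If the spec file uses EL7-only macros, you may need to install the source
RPM, edit the spec, and rebuild with rpmbuild -ba instead.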

HTH,
Niels

> 
> Regards,
> Jon Heese
> 
> On 4/1/2015 4:21 PM, Dan Lambright wrote:
> > Incidentally, for all you iSCSI-on-Gluster fans: Gluster has a "plugin" for LIO and the target daemon (tgt). The plugin lets the iSCSI server exchange I/O directly with the gluster process in user space, as opposed to routing it all through FUSE. It's a nice speed-up, in case anyone is looking for a performance bump :)
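For anyone who wants to try the tgt side of this, a target definition using
the gluster backstore would look roughly like the following in
/etc/tgt/targets.conf. The bs-type name, the volume@host/file backing-store
syntax, and the IQN here are illustrative and should be checked against your
tgt version:

```
<target iqn.2015-04.com.example:gluster-disk1>
    driver iscsi
    # the glfs backstore talks to gluster over libgfapi, bypassing FUSE
    bs-type glfs
    # assumed format: <volume>@<gluster-server>/<path-within-volume>
    backing-store gv0@node1/iscsi-disk.img
</target>
```

With something like this in place, tgtd opens the image through libgfapi
instead of going through a FUSE mount.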
> >
> > ----- Original Message -----
> >> From: "Jon Heese" <jonheese@xxxxxxxxxxxx>
> >> To: "Gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>
> >> Sent: Wednesday, April 1, 2015 3:20:41 PM
> >> Subject: Re:  iscsi and distributed volume
> >>
> >> Or use multipath I/O (assuming your iSCSI initiator OS supports it) to reach
> >> the iSCSI LUN through both nodes in an active/passive manner.
> >>
> >> I do this with tgtd directly on the Gluster nodes to serve up iSCSI disks
> >> from an image file sitting on a replicated volume to a VMware ESXi 5.5
> >> cluster.
> >>
> >> If you go this route, be sure to configure the iSCSI initiator(s)'
> >> multipath to be active/passive (or similar), as my testing with round-robin
> >> produced very poor performance and data corruption.
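As a concrete illustration, that failover policy can be pinned per LUN in
/etc/multipath.conf on the initiator; the WWID and alias below are
placeholders for your own LUN:

```
multipaths {
    multipath {
        # WWID of the iSCSI LUN as reported by `multipath -ll` (placeholder)
        wwid  360000000000000000e00000000010001
        alias gluster-iscsi
        # "failover" keeps one active path at a time (active/passive)
        # instead of spreading I/O round-robin across both targets
        path_grouping_policy failover
    }
}
```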
> >>
> >> Regards,
> >> Jon Heese
> >> ________________________________________
> >> From: gluster-users-bounces@xxxxxxxxxxx <gluster-users-bounces@xxxxxxxxxxx>
> >> on behalf of Paul Robert Marino <prmarino1@xxxxxxxxx>
> >> Sent: Wednesday, April 01, 2015 2:59 PM
> >> To: Dan Lambright
> >> Cc: Gluster-users@xxxxxxxxxxx List
> >> Subject: Re:  iscsi and distributed volume
> >>
> >> You do realize you would have to put the iSCSI target disk image on
> >> the mounted Gluster volume, not directly on the brick.
> >> That way, as long as you have replication, your volume remains accessible.
> >> You cannot point the iSCSI process directly at the brick, or
> >> replication and striping won't work properly.
> >> That said you could consider using something like keepalived with a
> >> monitoring script to handle a VIP for failover in case a node or some
> >> of the underlying processes go down.
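A minimal keepalived sketch of that idea; the interface, VIP, and check
command are placeholders you would adapt to your own setup:

```
vrrp_script chk_tgtd {
    # consider the node failed if the iSCSI target daemon is not running
    script "pidof tgtd"
    interval 2
}

vrrp_instance ISCSI_VIP {
    state BACKUP            # let priority decide which node is master
    interface eth0          # placeholder NIC
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.10.50/24    # placeholder VIP the initiators connect to
    }
    track_script {
        chk_tgtd
    }
}
```

The initiators then log in to the VIP, and keepalived moves it to the
surviving node when tgtd or the host goes down.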
> >>
> >>
> >> On Wed, Apr 1, 2015 at 10:17 AM, Dan Lambright <dlambrig@xxxxxxxxxx> wrote:
> >>>
> >>>
> >>> ----- Original Message -----
> >>>> From: "Roman" <romeo.r@xxxxxxxxx>
> >>>> To: gluster-users@xxxxxxxxxxx
> >>>> Sent: Wednesday, April 1, 2015 4:38:50 AM
> >>>> Subject:  iscsi and distributed volume
> >>>>
> >>>> Hi devs, list!
> >>>>
> >>>> I've got a question that is somewhat simple but at the same time pretty
> >>>> difficult, and I'm running glusterfs in production so I don't have any
> >>>> way to test this myself :(
> >>>>
> >>>> Say I've got a distributed gluster volume of 2x350GB.
> >>>> I want to export an iSCSI target for an M$ server, and I want it to be 600GB.
> >>>> I understand that when I create a large file for the iSCSI target with dd, it
> >>>> will be distributed between the two bricks. And here comes the question:
> >>>>
> >>>> What will happen when
> >>>>
> >>>> 1. One of the bricks goes down? OK, simple: the target won't be accessible.
> >>>> 2. Would the data be available again when the brick comes back up (i.e.
> >>>> after a failure due to network or power)?
> >>>>
> >>>> Yes, we have a backup server and a UPS and a generator, as we are running
> >>>> a DC, but I'm just curious whether we will have to restore the data from
> >>>> backups or whether it will be available after the brick comes back up.
> >>>
> >>> What kind of gluster volume is it? I would hope it is replicated.
> >>>
> >>> Data within the file is not distributed between two bricks, unless your
> >>> volume type is striped.
> >>>
> >>> Assuming it's replicated, if one brick went down, the other replica would
> >>> continue to operate, so you would have availability.
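For reference, the replicated setup being assumed here is created along these
lines (hostnames, brick paths, and the image name are made up):

```shell
# Two-way replica: the image file exists in full on both bricks
gluster volume create gv0 replica 2 \
    node1:/bricks/brick1 node2:/bricks/brick1
gluster volume start gv0

# Always create the backing file on the mounted volume, never on a brick
mount -t glusterfs node1:/gv0 /mnt/gv0
truncate -s 600G /mnt/gv0/iscsi-disk.img   # sparse 600GB backing file
```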
> >>>
> >>>
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> Best regards,
> >>>> Roman.
> >>>>
> >>>> _______________________________________________
> >>>> Gluster-users mailing list
> >>>> Gluster-users@xxxxxxxxxxx
> >>>> http://www.gluster.org/mailman/listinfo/gluster-users
> >>
> >

Attachment: pgpUQH6JFhiHy.pgp
Description: PGP signature

