Re: Gluster for Vmware

Another way to use Gluster with VMware is iSCSI.
I've not tried, nor heard of anyone trying, multipathing iSCSI to Gluster from VMware, but point-to-point should work fine.

http://www.gluster.org/community/documentation/index.php/GlusterFS_iSCSI
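The linked guide boils down to exporting a file that lives on a locally mounted gluster volume as an iSCSI LUN, which ESXi can then reach through its software iSCSI adapter. A minimal point-to-point sketch using tgtd's tgtadm follows; the volume name, IQN, size, and paths are my own placeholders, not taken from the guide:

```shell
# Sketch: export a file on a locally mounted gluster volume as an iSCSI
# LUN via tgtd. Volume name, IQN, size, and paths are placeholders.
mount -t glusterfs localhost:/vol_rep /mnt/gluster_vol
truncate -s 100G /mnt/gluster_vol/vmware-lun1.img     # sparse backing file

# Create the target, attach the backing file as LUN 1, allow all initiators
tgtadm --lld iscsi --op new --mode target --tid 1 \
    -T iqn.2014-08.org.example:gluster-lun1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
    -b /mnt/gluster_vol/vmware-lun1.img
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
```

These commands need root and a running tgtd next to the gluster mount. Note this gives you a single path only; failover would need a multipath layer on top, which as said above is untested with gluster.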

----- Original Message -----
> From: "Michael DePaulo" <mikedep333@xxxxxxxxx>
> To: "Peter Auyeung" <pauyeung@xxxxxxxxxxxxx>
> Cc: "gluster-users" <gluster-users@xxxxxxxxxxx>
> Sent: Sunday, August 31, 2014 12:17:24 AM
> Subject: Re:  Gluster for Vmware
> 
> 
> 
> Hi Peter,
> 
> On Aug 26, 2014 2:18 AM, "Peter Auyeung" < pauyeung@xxxxxxxxxxxxx > wrote:
> > 
> > You can use ctdb for nfs failover
> > 
> > But I do want to know if we can use native glusterfs client for vmware
> > 
> > Peter
> 
> I did some research, and it looks like the native GlusterFS client is not
> available for ESXi. The primary reason is probably that FUSE is not
> available on ESXi.
> 
> If you do not need all the features of vSphere, you might try VMware
> Workstation running in server mode on top of Linux. I'm sure that will work
> with GlusterFS.
> http://blogs.vmware.com/workstation/2012/02/vmware-workstation-8-as-an-alternative-to-vmware-server.html
> 
> -Mike
> 
> > On Aug 25, 2014, at 9:28 PM, "Chandrahasa S" < chandrahasa.s@xxxxxxx >
> > wrote:
> > 
> >> I am thinking of using a Gluster volume to create a datastore in VMware.
> >> 
> >> I can use NFS, but I won't get HA and load balancing the way I would with GlusterFS.
> >> 
> >> Chandra
> >> 
> >> 
> >> 
> >> From: "John G. Heim" < jheim@xxxxxxxxxxxxx >
> >> To: Chandrahasa S < chandrahasa.s@xxxxxxx >, gluster-users@xxxxxxxxxxx
> >> Date: 08/25/2014 06:49 PM
> >> Subject: Re:  Gluster for Vmware
> >> ________________________________
> >> 
> >> 
> >> 
> >> Do you mean you want to mount a gluster volume on a virtual machine? You
> >> can do that the same way you'd do it on a real machine. You can probably
> >> even create a brick on a virtual machine but I don't see much point in
> >> that.
> >> 
> >> But we regularly mount our gluster volume on virtual machines. We use
> >> Debian, so it's as simple as this:
> >> 
> >> 1. service glusterfs-server start
> >> 2. mount -t glusterfs localhost:/volumename /mountpoint
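> >> If you want the mount to survive reboots, a matching /etc/fstab line
> >> might look like this (a sketch; substitute your own volume name and
> >> mount point, and _netdev so it waits for the network):
> >> 
> >> localhost:/volumename  /mountpoint  glusterfs  defaults,_netdev  0  0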
> >> 
> >> 
> >> 
> >> On 08/25/2014 12:06 AM, Chandrahasa S wrote:
> >> Dear All,
> >> 
> >> Is there any way to use a GlusterFS volume in a VMware environment?
> >> 
> >> 
> >> Chandra.
> >> 
> >> 
> >> 
> >> From: Ben Turner < bturner@xxxxxxxxxx >
> >> To: Juan José Pavlik Salles < jjpavlik@xxxxxxxxx >
> >> Cc: gluster-users@xxxxxxxxxxx
> >> Date: 08/22/2014 08:57 PM
> >> Subject: Re:  Gluster 3.5.2 gluster, how does cache work?
> >> Sent by: gluster-users-bounces@xxxxxxxxxxx
> >> ________________________________
> >> 
> >> 
> >> 
> >> ----- Original Message -----
> >> > From: "Juan José Pavlik Salles" < jjpavlik@xxxxxxxxx >
> >> > To: gluster-users@xxxxxxxxxxx
> >> > Sent: Thursday, August 21, 2014 4:07:28 PM
> >> > Subject:  Gluster 3.5.2 gluster, how does cache work?
> >> > 
> >> > Hi guys, I've been reading a bit about caching in gluster volumes, but I
> >> > still don't get a few things. I set up a gluster replica 2 volume like
> >> > this:
> >> > 
> >> > [root@gluster-test-1 ~]# gluster vol info vol_rep
> >> > Volume Name: vol_rep
> >> > Type: Replicate
> >> > Volume ID: b77db06d-2686-46c7-951f-e43bde21d8ec
> >> > Status: Started
> >> > Number of Bricks: 1 x 2 = 2
> >> > Transport-type: tcp
> >> > Bricks:
> >> > Brick1: gluster-test-1:/ladrillos/l1/l
> >> > Brick2: gluster-test-2:/ladrillos/l1/l
> >> > Options Reconfigured:
> >> > performance.cache-min-file-size: 90MB
> >> > performance.cache-max-file-size: 256MB
> >> > performance.cache-refresh-timeout: 60
> >> > performance.cache-size: 256MB
> >> > [root@gluster-test-1 ~]#
> >> > 
> >> > Then I mounted the volume with the gluster client on another machine. I
> >> > created an 80 MB file called 80, and here is the reading test:
> >> > 
> >> > [root@gluster-client-1 gluster_vol]# dd if=/mnt/gluster_vol/80
> >> > of=/dev/null
> >> > bs=1M
> >> > 80+0 records in
> >> > 80+0 records out
> >> > 83886080 bytes (84 MB) copied, 1,34145 s, 62,5 MB/s
> >> > [root@gluster-client-1 gluster_vol]# dd if=/mnt/gluster_vol/80
> >> > of=/dev/null
> >> > bs=1M
> >> > 80+0 records in
> >> > 80+0 records out
> >> > 83886080 bytes (84 MB) copied, 0,0246918 s, 3,4 GB/s
> >> > [root@gluster-client-1 gluster_vol]# dd if=/mnt/gluster_vol/80
> >> > of=/dev/null
> >> > bs=1M
> >> > 80+0 records in
> >> > 80+0 records out
> >> > 83886080 bytes (84 MB) copied, 0,0195678 s, 4,3 GB/s
> >> > [root@gluster-client-1 gluster_vol]#
> >> 
> >> You are seeing the effect of client-side kernel caching. If you want to
> >> see the actual throughput for reads, run:
> >> 
> >> sync; echo 3 > /proc/sys/vm/drop_caches; dd blah
> >> 
> >> Kernel caching happens on both the client and the server side. When I want
> >> to see uncached performance, I drop caches on both clients and servers:
> >> 
> >> run_drop_cache()
> >> {
> >>     for host in $MASTERNODE $NODE $CLIENT
> >>     do
> >>         echo "Dropping cache on $host"
> >>         # Flush dirty pages, then drop page/dentry/inode caches remotely
> >>         ssh -i /root/.ssh/my_id root@${host} sync
> >>         ssh -i /root/.ssh/my_id root@${host} "echo 3 > /proc/sys/vm/drop_caches"
> >>     done
> >> }
> >> 
> >> HTH!
> >> 
> >> -b
> >> 
> >> > Cache is working flawlessly (even though 80 MB is smaller than the
> >> > min-file-size value, I don't care about that right now). What I don't
> >> > get is where the cache is being stored: on the client side or on the
> >> > server side? According to the documentation, the io-cache translator
> >> > can be loaded on both sides (client and server), so how can I tell
> >> > where it is being loaded? Judging by the speed, it looks like the data
> >> > is being cached locally, but I'd like to be sure.
> >> > 
> >> > Thanks!
> >> > 
> >> > --
> >> > Pavlik Salles Juan José
> >> > Blog - http://viviendolared.blogspot.com
> >> > 
> >> > _______________________________________________
> >> > Gluster-users mailing list
> >> > Gluster-users@xxxxxxxxxxx
> >> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> >> 
> > 
> > 
> 




