If you're asking whether you can run VMware images off of Gluster, then yes, you can. Gluster supports NFS mount points, which VMware ESX also supports.
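For example, here is a minimal sketch of what that could look like; the volume name "vmstore", the server name and the datastore name are made up for illustration, and the exact esxcli invocation should be checked against your ESX version:

    # On any Gluster server: make sure the volume is exported over the
    # built-in NFS server (nfs.disable is off by default on 3.x).
    gluster volume set vmstore nfs.disable off

    # On the ESXi host: add the volume as an NFS datastore.
    esxcli storage nfs add --host gluster-server-1 --share /vmstore --volume-name vmstore_ds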
Sent from my iPad

Dear All,
Is there any way to use a GlusterFS volume in a VMware environment?
Chandra.
From: Ben Turner <bturner@xxxxxxxxxx>
To: Juan José Pavlik Salles <jjpavlik@xxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Date: 08/22/2014 08:57 PM
Subject: Re: Gluster 3.5.2 gluster, how does cache work?
Sent by: gluster-users-bounces@xxxxxxxxxxx
----- Original Message -----
> From: "Juan José Pavlik Salles" <jjpavlik@xxxxxxxxx>
> To: gluster-users@xxxxxxxxxxx
> Sent: Thursday, August 21, 2014 4:07:28 PM
> Subject: Gluster 3.5.2 gluster, how does cache work?
>
> Hi guys, I've been reading a bit about caching in gluster volumes, but I
> still don't get a few things. I set up a gluster replica 2 volume like this:
>
> [root@gluster-test-1 ~]# gluster vol info vol_rep
> Volume Name: vol_rep
> Type: Replicate
> Volume ID: b77db06d-2686-46c7-951f-e43bde21d8ec
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: gluster-test-1:/ladrillos/l1/l
> Brick2: gluster-test-2:/ladrillos/l1/l
> Options Reconfigured:
> performance.cache-min-file-size: 90MB
> performance.cache-max-file-size: 256MB
> performance.cache-refresh-timeout: 60
> performance.cache-size: 256MB
> [root@gluster-test-1 ~]#
>
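(For anyone reproducing this setup: the cache options shown in the listing above would have been applied with "gluster volume set", roughly like this, using the volume name and values from the listing:)

    gluster volume set vol_rep performance.cache-size 256MB
    gluster volume set vol_rep performance.cache-refresh-timeout 60
    gluster volume set vol_rep performance.cache-min-file-size 90MB
    gluster volume set vol_rep performance.cache-max-file-size 256MB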
> Then I mounted the volume with the gluster client on another machine. I created
> an 80 MB file called 80, and here is the reading test:
>
> [root@gluster-client-1 gluster_vol]# dd if=/mnt/gluster_vol/80 of=/dev/null
> bs=1M
> 80+0 records in
> 80+0 records out
> 83886080 bytes (84 MB) copied, 1,34145 s, 62,5 MB/s
> [root@gluster-client-1 gluster_vol]# dd if=/mnt/gluster_vol/80 of=/dev/null
> bs=1M
> 80+0 records in
> 80+0 records out
> 83886080 bytes (84 MB) copied, 0,0246918 s, 3,4 GB/s
> [root@gluster-client-1 gluster_vol]# dd if=/mnt/gluster_vol/80 of=/dev/null
> bs=1M
> 80+0 records in
> 80+0 records out
> 83886080 bytes (84 MB) copied, 0,0195678 s, 4,3 GB/s
> [root@gluster-client-1 gluster_vol]#
You are seeing the effect of client-side kernel caching. If you want
to see the actual throughput for reads, run:

sync; echo 3 > /proc/sys/vm/drop_caches; dd blah

Kernel caching happens on both the client and the server side; when I want
to see uncached performance I drop caches on both clients and servers:
run_drop_cache()
{
    # Drop the page cache on every node involved in the test so the next
    # read comes from the bricks rather than from RAM.
    for host in $MASTERNODE $NODE $CLIENT
    do
        echo "Dropping cache on $host"
        ssh -i /root/.ssh/my_id root@${host} sync
        ssh -i /root/.ssh/my_id root@${host} "echo 3 > /proc/sys/vm/drop_caches"
    done
}
HTH!
-b
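To tie that back to the dd test above, here is a minimal sketch of how the function might be used, assuming the host names from this thread and working passwordless ssh as root:

    MASTERNODE=gluster-test-1
    NODE=gluster-test-2
    CLIENT=gluster-client-1

    run_drop_cache
    # With all page caches dropped, this read has to come from the bricks.
    dd if=/mnt/gluster_vol/80 of=/dev/null bs=1M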
> Cache is working flawlessly (even though that 80 MB file is smaller than the
> min-file-size value, but I don't care about that right now). What I don't get
> is where the cache is being stored. Is it stored on the client side or on the
> server side? According to the documentation, the io-cache translator can be
> loaded on both sides (client and server), so how can I know where it is being
> loaded? It looks like it is being cached locally, judging by the speed,
> but I'd like to be sure.
>
> Thanks!
>
> --
> Pavlik Salles Juan José
> Blog - http://viviendolared.blogspot.com
>
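On the question of where io-cache is loaded: in a stock 3.x setup the performance translators, io-cache included, are normally part of the client-side (FUSE) graph rather than the brick graph. One way to check, assuming the standard /var/lib/glusterd layout (treat the exact file name as an assumption for your version):

    # On any server node: the generated client volfile lists the translators
    # the FUSE client will load; io-cache showing up here means client-side caching.
    grep -A 3 io-cache /var/lib/glusterd/vols/vol_rep/vol_rep-fuse.vol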
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users