Re: Performance in Xen HVM using loop/img based disk file


 



Dear Richard!
Unfortunately, to use Xen with "tap:aio" to mount a disk image on GlusterFS, the volume
has to be mounted with --disable-direct-io-mode, because otherwise Xen returns an error on the device.

The worst aspect of using disable-direct-io-mode is that reads and writes lose about 50% of their performance
(max write to a new file is 101 Mb/s, and rewrite is 67.8 Mb/s).
It is quite a disaster, even though the servers are connected over InfiniBand (10x).

I have no idea whether this aspect (having to mount with direct I/O disabled in order to use it with Xen) can be solved in the future, but for me, at the moment, GlusterFS is a very, very good filesystem that is nevertheless not an optimal
choice for Xen storage.
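For reference, this is roughly the setup I am describing; the paths and device names below are only placeholders, not my exact configuration:

# mount the client volume with direct I/O disabled (otherwise tap:aio fails)
glusterfs --disable-direct-io-mode -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs

# and in the Xen domU config, the disk line points at an image on that mount
disk = [ 'tap:aio:/mnt/glusterfs/vm1/disk.img,hda,w' ]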

Enrico

Have you tried with io-threads on the server side?
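Something along these lines on top of your brick volume on the server side (the volume names here are only an example, not your config):

volume iot
  type performance/io-threads
  option thread-count 8
  subvolumes brick          # your locks/posix volume
end-volume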

Avati

2009/2/21 Richard Williams <richard@xxxxxxxxxxxxx>:
Right now I’m testing things out, trying to get Xen going using gluster as
the storage back end.

I have a pretty simple setup at the moment.



1 server running Xen 3.3.1/Debian Etch(gluster client)

1 server running openfiler 2.3 (gluster server)



Networking on gluster client:

2 gigabit NICs in bond0 (mode 4)

bond0 is bridged with xenbr0

Xen adds virtual interfaces to the xenbr0 bridge for the virtual machines.

Basically all externally bound traffic goes through xenbr0, then bond0 which
is then balanced over eth0 and eth1
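For reference, the relevant part of my /etc/network/interfaces looks roughly like the sketch below (addresses and some option names are from memory, so treat it as an example rather than my exact config):

# bond0: eth0 + eth1 aggregated with 802.3ad (bonding mode 4)
auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_mode 802.3ad
    bond_miimon 100

# xenbr0: bridge on top of bond0; Xen attaches the domU vifs to this bridge
auto xenbr0
iface xenbr0 inet static
    address <dom0 ip>
    netmask 255.255.255.0
    gateway <gateway ip>
    bridge_ports bond0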



Networking on gluster server:

2 gigabit NICs in bond0 (mode 4)

All external traffic goes through bond0



Gluster version on server and client:

# glusterfs --version

glusterfs 2.0.0rc1 built on Feb 17 2009 10:28:23

Repository revision: glusterfs--mainline--3.0--patch-844



Fuse on gluster client:

I believe I’m using the Gluster patched fuse, but I’m not sure.

When I tried to compile fuse, it said that the module was in the kernel.

So I tried to compile gluster, and it didn’t see fuse.

So I tried to modprobe fuse but it wasn’t there.

Then I compiled fuse using the --enable-kernel-module option to force the fuse module to
compile.

Then I compiled gluster.
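Roughly the commands I ran, from memory, so the exact tarball names and versions may be off:

# build the GlusterFS-patched fuse, forcing the kernel module to be built
cd fuse-2.7.3glfs*
./configure --enable-kernel-module
make && make install

# then build glusterfs itself against that fuse
cd ../glusterfs-2.0.0rc1
./configure
make && make install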



Fuse on gluster server:

I think I’m using the fuse that came with Openfiler 2.3

(again, not sure how to check at this point)
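I guess something like the following would show it, though I'm not sure these are the right commands to check with:

# is the fuse kernel module loaded, and which module file does it come from?
lsmod | grep fuse
modinfo fuse

# which userspace fuse library/tools are installed?
fusermount -V
pkg-config --modversion fuse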



Ok, so basically everything is generally working great as far as I can tell
- no errors in logs, etc.

I’m running Windows 2008 on Xen and all is functioning… except one thing.



Whenever I put a decent bit of disk I/O on the virtual machine, such as
downloading a large file from the internet,

the virtual machine seems to hang for 2-5 seconds, continue, then hang,
then continue, then hang… as it’s downloading the file.

The whole VM doesn’t hang, however… any operation that does not require disk
access will continue to run smoothly.



If I move the VM disk off of the gluster mount onto local storage, then
everything runs fine.  Downloading a file runs quite smoothly then.



So I think it’s gluster/fuse/networking related (maybe) but I don’t know how
to figure it out from here.

I’ve played with various performance translators and also disabling all
performance translators.

None of the variations seem to have any impact on this issue at all.

I suspect it’s something deeper or perhaps I’m just missing something with
the performance translators...
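For example, one of the client-side variants I tried looked roughly like this (just to illustrate the kind of stacking I mean; the option names and values here are approximate, not my exact config):

volume remote
  type protocol/client
  option transport-type tcp
  option remote-host 64.16.220.101
  option remote-subvolume tempvm-brick
end-volume

volume readahead
  type performance/read-ahead
  option page-size 128KB
  option page-count 4
  subvolumes remote
end-volume

volume writebehind
  type performance/write-behind
  option block-size 128KB
  subvolumes readahead
end-volume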

Anyway, I’m willing to try any and all suggestions.

I would really love to have gluster working – it’s the most innovative
solution to storage that I’ve found for what I’d like to do.



Thanks!!



Richard Williams



My .vol files:



# file: /etc/glusterfs/glusterfs-client.vol

volume remote

  type protocol/client

  option transport-type tcp

  option remote-host 64.16.220.101  # can be IP or hostname

  option remote-subvolume tempvm-brick

end-volume



volume writebehind

  type performance/write-behind

  option block-size 128KB

  subvolumes remote

end-volume





# file: /usr/local/etc/glusterfs/glusterfs-server.vol

volume posix-tempvm-brick

 type storage/posix

 option directory /mnt/nasblock1vg/nasblock1ext3/tempvm-brick

end-volume



volume locks-tempvm-brick

 type features/locks

 subvolumes posix-tempvm-brick

end-volume



volume tempvm-brick

 type performance/io-threads

 option thread-count 8

 subvolumes locks-tempvm-brick

end-volume



volume server

 type protocol/server

 option transport-type tcp

 option auth.addr.tempvm-brick.allow <masked ip addresses>

 subvolumes tempvm-brick

end-volume
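For completeness, I start the server side by pointing glusterfsd at this file (a plain invocation, nothing special):

glusterfsd -f /usr/local/etc/glusterfs/glusterfs-server.vol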



_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel



