Question about compile performance over GlusterFS

I have been testing out my GlusterFS setup, and I am very happy
with the streaming I/O performance and scalability. We have some
users on the system now, and they are seeing very good performance
(fast and consistent) compared to our other filesystem.

I created a test that tries to measure metadata performance by
building the Linux kernel. What I have found is that GlusterFS is
slower than local disk, NFS, and Panasas: the compile time on those
three systems is roughly 500 seconds, while on GlusterFS (1.3.7) it
is roughly 1200 seconds. My GlusterFS filesystem uses ramdisks on
the servers and communicates over InfiniBand (via the ib-sdp
transport, as the configs show). My server and client configs are below.

Note that I enabled write-behind but not read-ahead, based on some
benchmarks posted to the list showing how read-ahead affects
re-write performance.
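
For reference, the read-ahead stanza I left out would look roughly
like this (option names are from the 1.3 translator docs; the values
are examples only, not a recommendation):

volume ra
  type performance/read-ahead
  subvolumes wb                # would slot between wb and ioc below
  option page-size 128KB       # example value
  option page-count 4          # example value
end-volume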

So, is this just because mmap isn't (yet) supported in FUSE?
Or is there something else I should be looking at?
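
For anyone who wants to check their own mount, here is a minimal
probe for writable shared mmap; the /mnt/glusterfs path is only a
placeholder for your mountpoint, and on a FUSE filesystem without
mmap support the mmap() call should fail with ENODEV:

/* mmap_check.c: probe for writable shared mmap on a mountpoint.
 * The path below is a placeholder -- point it at your mount. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/glusterfs/.mmap_check"; /* placeholder path */
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, 4096) != 0) { perror("ftruncate"); return 1; }

    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");          /* expect ENODEV if FUSE lacks mmap support */
        return 1;
    }
    memcpy(p, "ok", 2);          /* touch the mapping to force real I/O */
    munmap(p, 4096);
    close(fd);
    unlink(path);
    puts("writable shared mmap works here");
    return 0;
}

Build with gcc -o mmap_check mmap_check.c and run it from inside the
mount.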

Thanks,
Craig


server.cfg
----------

volume brick
  type storage/posix                   # POSIX FS translator
  option directory /tmp/scratch/export        # Export this directory
end-volume

volume server
  type protocol/server
  subvolumes brick
  option transport-type ib-sdp/server     # SDP over InfiniBand transport
  option auth.ip.brick.allow *
end-volume
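
Note: the client config refers to a remote-subvolume named brick-ns,
which is not shown in the server.cfg above. The namespace brick on
the server side would look something like this (the directory path
here is illustrative):

volume brick-ns
  type storage/posix
  option directory /tmp/scratch/export-ns   # illustrative namespace directory
end-volume

The server volume would then list "subvolumes brick brick-ns" and
add a matching "option auth.ip.brick-ns.allow *" line.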

client.cfg
----------

volume client-ns
  type protocol/client
  option transport-type ib-sdp/client
  option remote-host w8-ib0
  option remote-subvolume brick-ns
end-volume



volume client-w8
  type protocol/client
  option transport-type ib-sdp/client
  option remote-host w8-ib0
  option remote-subvolume brick
end-volume

volume unify
  type cluster/unify
  subvolumes client-w8
  option namespace client-ns
  option scheduler rr
end-volume

volume iot
  type performance/io-threads
  subvolumes unify
  option thread-count 4
end-volume

volume wb
  type performance/write-behind
  subvolumes iot
end-volume

volume ioc
  type performance/io-cache
  subvolumes wb
end-volume

----------
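
One thing I still plan to try is tuning the caching translators,
since a kernel build is mostly small files and metadata round trips.
A sketch of what I have in mind (option names are from the 1.3 docs;
the values are examples only):

volume wb
  type performance/write-behind
  subvolumes iot
  option aggregate-size 128KB   # example value
  option flush-behind on        # example; batches the flush at close()
end-volume

volume ioc
  type performance/io-cache
  subvolumes wb
  option page-size 128KB        # example value
  option cache-size 64MB        # example value
end-volume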




--
Craig Tierney (craig.tierney@xxxxxxxx)



