Re: accessing glusterfs mounted share - really slow

Hi,

Anand Avati wrote:
Matthias,
have you loaded io-threads on the server bricks? io-threads is meant to classify file I/O and metadata operations into separate threads, so readdir() operations would not fall in line behind ongoing writes, but instead go into a different queue and get processed by a separate thread. This should make ls more 'interactive' while disk I/O is happening.

On the server side I only export my volumes:

#server-side
---snip---
...
volume sdb1
 type storage/posix
 option directory /export/sdb1
end-volume

volume sdc1
 type storage/posix
 option directory /export/sdc1
end-volume

volume server
       type protocol/server
       option transport-type tcp/server
       option listen-port 6997
       subvolumes sdb1 sdc1
       option auth.ip.sdb1.allow *
       option auth.ip.sdc1.allow *
end-volume
...
---snap---
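If I understand the suggestion, loading io-threads would mean stacking a performance/io-threads volume on top of each posix brick, roughly like this (an untested sketch on my part; the "-posix" naming and the thread-count value are just guesses):

#server-side with io-threads
---snip---
volume sdb1-posix
 type storage/posix
 option directory /export/sdb1
end-volume

# io-threads sits between the posix brick and protocol/server;
# reusing the name "sdb1" keeps the server's subvolumes line unchanged
volume sdb1
 type performance/io-threads
 option thread-count 4
 subvolumes sdb1-posix
end-volume
---snap---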

On the client side I added the afr, unify, write-behind and read-ahead translators:

#client-side
---snip---
...
volume gluster1-sdb1
 type protocol/client
 option transport-type tcp/client
 option remote-host gluster1
 option remote-port 6997
 option remote-subvolume sdb1
end-volume

volume gluster1-sdc1
 type protocol/client
 option transport-type tcp/client
 option remote-host gluster1
 option remote-port 6997
 option remote-subvolume sdc1
end-volume
...
volume afr1
 type cluster/afr
 subvolumes gluster3-hdb1 gluster4-hdc1
 option replicate *:2
end-volume
...
volume cluster
 type cluster/unify
 subvolumes afr1 afr2 afr3 afr4 gluster2-hdb1
 option scheduler alu
 option alu.limits.min-free-disk  6GB
 option alu.limits.max-open-files 10000
 option namespace brick
 option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
 option alu.disk-usage.entry-threshold 100GB
 option alu.disk-usage.exit-threshold  60MB
 option alu.open-files-usage.entry-threshold 1024
 option alu.open-files-usage.exit-threshold 32
 option alu.stat-refresh.interval 10sec
end-volume

volume writebehind
 type performance/write-behind
 option aggregate-size 131072 # unit in bytes
 option flush-behind off
 subvolumes cluster
end-volume

volume readahead
 type performance/read-ahead
 option page-size 65536
 option page-count 16
 subvolumes writebehind
end-volume
---snap---

So I guess I should also add read-ahead and write-behind translators on the server side?
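Something like this, perhaps (again only an untested sketch, with the sizes copied from my client config, and with sdb1-ra then exported through protocol/server instead of sdb1)?

---snip---
volume sdb1-wb
 type performance/write-behind
 option aggregate-size 131072 # unit in bytes
 subvolumes sdb1
end-volume

volume sdb1-ra
 type performance/read-ahead
 option page-size 65536
 option page-count 16
 subvolumes sdb1-wb
end-volume
---snap---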

Regards,

 Matthias

avati

2007/8/31, Matthias Albert <gluster@xxxxxxxxxxxxxxxx>:

    Hi Krishna,


    Krishna Srinivas wrote:
    > Hi Matthias,
    >
    > If I understand correctly, for you all the operations are fine, but
    > when a "cp" is being done and simultaneously you do "ls" from
    > another client, the "ls" is slow?
    >
    Yep, absolutely correct. Only if I do a "cp or dd", for example, are
    ls and tab completion really slow, and only in the glusterfs mounted
    share.

    Matthias

    > Krishna
    >
    > On 8/31/07, Matthias Albert <gluster@xxxxxxxxxxxxxxxx> wrote:
    >
    >> Hi all,
    >>
    >> first of all, I have to say that glusterfs is really cool and
    >> absolutely great. I'm not a cluster filesystem specialist, but I
    >> tested/configured OpenAFS and Lustre, and both of them are so huge
    >> and complicated. When I saw glusterfs and played a little bit with
    >> it, I was really surprised how easy it is to set up a cluster
    >> filesystem without extra ACLs, without formatting a new
    >> filesystem, and without a metadata/object server :-). Thanks a lot
    >> for this.
    >>
    >> Of course I have some questions :-).
    >>
    >> I've set up 4 glusterfsd servers, each of them with a raw storage
    >> capacity of about 400-500 GB.
    >> On the client side I made different afr's over my remote volumes,
    >> and finally a unify over the afr's. Read-ahead and write-behind
    >> are also enabled.
    >>
    >> Everything is working fine. I can copy "tons" of gigabytes into
    >> my glusterfs without any problems, and the performance is
    >> absolutely great.
    >>
    >> But every time I start a "cp" or do a "dd" test (to write some
    >> test files into the gluster storage) on some of my clients (I have
    >> 3 glusterfs clients; one of them is a Bacula server which uses the
    >> glusterfs as storage), all access from my glusterfs clients to the
    >> mounted share is really slow. It sometimes takes about 3-4 seconds
    >> until my ls prints the directory listing.
    >>
    >> e.g.
    >> ---snip---
    >> bash# df -h
    >> glusterfs             892G   84G  809G  10% /backup
    >>
    >> gsx:/backup/vmware-images # time ll
    >> ...
    >> ...
    >> real     0m2.863s
    >> user    0m0.004s
    >> sys     0m0.005s
    >> gsx:/backup/vmware-images #
    >> ---snap---
    >>
    >> Also "tab completion" in the mounted glusterfs share is really
    >> slow. Access to paths that are not on the glusterfs-mounted share
    >> is just normal (accessing /etc, /usr, /root, etc.).
    >>
    >> Does anyone know this phenomenon?
    >>
    >> I'm using Debian as the distro for all of my servers, and Debian
    >> and SuSE on the client side.
    >>
    >> glusterfs version: glusterfs--mainline--2.5 patch-459
    >> fuse: fuse-2.7.0-glfs3
    >>
    >> If needed I can post my configs, strace output of "ls -la", and
    >> so on.
    >>
    >> Regards,
    >>
    >>   Matthias
    >>



    _______________________________________________
    Gluster-devel mailing list
    Gluster-devel@xxxxxxxxxx
    http://lists.nongnu.org/mailman/listinfo/gluster-devel




--
It always takes longer than you expect, even when you take into account Hofstadter's Law.

-- Hofstadter's Law




