Re: accessing glusterfs mounted share - really slow

Matthias,
 have you loaded io-threads on the server bricks? io-threads is meant to
classify file I/O and metadata operations into separate threads, so
readdir() operations do not fall in line behind ongoing writes; instead
they go into a different queue and are processed by a separate thread. This
should make ls more 'interactive' while disk I/O is happening.
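For reference, loading io-threads on a server brick looks roughly like the
sketch below (a minimal example volfile fragment; the volume names, the
directory path, and the thread count are illustrative, not taken from this
thread — check the docs for your glusterfs version):

---snip---
# example posix brick (path is illustrative)
volume brick
  type storage/posix
  option directory /data/export
end-volume

# stack io-threads on top of the brick so metadata ops
# (readdir, lookup) are served by their own threads
volume iot
  type performance/io-threads
  option thread-count 4
  subvolumes brick
end-volume
---snap---

The translator that the server then exports would take 'iot' (rather than
'brick') as its subvolume, so all client requests pass through io-threads.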

avati

2007/8/31, Matthias Albert <gluster@xxxxxxxxxxxxxxxx>:
>
> Hi Krishna,
>
>
> Krishna Srinivas schrieb:
> > Hi Matthias,
> >
> > If I understand correctly, for you all the operations are fine, but
> > when a "cp" is being done and simultaneously you do "ls" from
> > another client, the "ls" is slow?
> >
> Yep, absolutely correct. Only if I do a "cp" or "dd", for example, are ls
> and tab completion really slow, and only in the glusterfs-mounted share.
>
> Matthias
>
> > Krishna
> >
> > On 8/31/07, Matthias Albert <gluster@xxxxxxxxxxxxxxxx> wrote:
> >
> >> Hi all,
> >>
> >> first of all, I have to say that glusterfs is really cool and
> >> absolutely great. I'm not a cluster filesystem specialist, but I have
> >> tested/configured OpenAFS and Lustre, and both of them are so huge and
> >> complicated. When I saw glusterfs and played with it a little, I was
> >> really surprised how easy it is to set up a cluster filesystem without
> >> extra ACLs, without formatting a new filesystem, and without a
> >> metadata/object server :-). Thanks a lot for this.
> >>
> >> Of course I've some questions :-).
> >>
> >> I've set up 4 glusterfsd servers, each with about 400-500 GB of raw
> >> storage.
> >> On the client side I made different afr's over my remote volumes and
> >> finally a unify over the afr's. Read-ahead and write-behind are also
> >> enabled.
> >>
> >> Everything is working fine. I can copy "tons" of gigabytes into my
> >> glusterfs without any problems, and my performance is also absolutely
> >> great.
> >>
> >> But every time I start a "cp" or do a "dd" test (to write some test
> >> files to the gluster storage) on some of my clients (I have 3 glusterfs
> >> clients; one of them is a bacula server which uses the glusterfs as
> >> storage), all access from my glusterfs clients to the mounted share is
> >> really slow. It sometimes takes about 3-4 seconds until ls prints the
> >> directory listing.
> >>
> >> e.g.
> >> ---snip---
> >> bash# df -h
> >> glusterfs             892G   84G  809G  10% /backup
> >>
> >> gsx:/backup/vmware-images # time ll
> >> ...
> >> ...
> >> real    0m2.863s
> >> user    0m0.004s
> >> sys     0m0.005s
> >> gsx:/backup/vmware-images #
> >> ---snap---
> >>
> >> Also, tab completion in the mounted glusterfs share is really slow.
> >> Access to paths outside the glusterfs share is normal (e.g. /etc,
> >> /usr, /root).
> >>
> >> Does anyone know this phenomenon?
> >>
> >> I'm using Debian on all of my servers, and Debian and SuSE on the
> >> client side.
> >>
> >> glusterfs version: glusterfs--mainline--2.5 patch-459
> >> fuse: fuse-2.7.0-glfs3
> >>
> >> If needed I can post my configs, strace outputs of ls -la and so on.
> >>
> >> Regards,
> >>
> >>   Matthias
> >>
> >> _______________________________________________
> >> Gluster-devel mailing list
> >> Gluster-devel@xxxxxxxxxx
> >> http://lists.nongnu.org/mailman/listinfo/gluster-devel
> >>
> >>
>
>
>
>



-- 
It always takes longer than you expect, even when you take into account
Hofstadter's Law.

-- Hofstadter's Law

