Re: Doing ls with a lot of directories, files

Tests showed better performance in copying with jumbo frames, both
directly and in glusterfs. 

When people are working over ssh tunnels (and here they do that all the
time), plus nis, ntp, the namespace traffic in glusterfs (which is all
zero-byte files) and so on, you get a constant stream of small packets.
Carrying those small packets in jumbo frames is a waste of resources, and
they disturb the flow when files are being copied from storage to a
computing node. You don't see this until the system is under load, but the
more load there is, the more you gain from separating the two kinds of
traffic.
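
Just to illustrate the split on the client side (the address and volume
names below are only placeholders, not taken from my real volfiles): the
protocol/client volumes point at the storage server's address on the
10.0.0.0/24 mtu-9000 network, while ssh, nis and ntp keep using the
official mtu-1500 address.

volume store1
        type protocol/client
        option transport-type tcp/client
        # server's address on the mtu-9000 file-transport network
        # (placeholder address), not its official mtu-1500 address
        option remote-host 10.0.0.11
        option remote-subvolume brick
end-volume

That way none of the small-packet services ever touch the jumbo-frame
switch.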

I had to use two separate switches (Cisco 3750) because the packet size is
a global setting on that hardware: you can't mix MTUs even though you can
run different VLANs on the same switch. At the same time the switch gets
tied to that packet size, so if it mostly carries small packets you really
waste a lot of the hardware.

About the MTU: the computer has some work to do splitting a file into
packets, CRC checking, reassembling the packets into a file again, control
traffic to confirm the packets have arrived, and so on. One jumbo frame
holds six 1500-byte frames, so under perfect circumstances you do 1/6 of
that per-packet work. Moving 150GB you can really feel the difference, and
it shows up in the load on the computers involved.
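
As a rough back-of-envelope figure (ignoring protocol headers and assuming
full frames): 150GB is about 150 * 10^9 bytes, which comes to roughly 100
million frames at 1500 bytes but only about 17 million frames at 9000
bytes - the same 1/6 of the per-packet work over the whole transfer.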

Regards,
Einar

On Thu, 2008-04-24 at 12:55 +0100, gordan@xxxxxxxxxx wrote:
> Can you explain what the principal change was that made it go much faster? 
> Splitting up GlusterFS and non-GlusterFS traffic? Increasing the MTU? I 
> cannot see either of those making that big a difference...
> 
> Gordan
> 
> On Thu, 24 Apr 2008, Einar Gautun wrote:
> 
> > Hello,
> > I've set up a split network, with one switch (Gbit) carrying the official
> > IP for ssh, ntp, nis and so on - mtu 1500. Then I have another
> > switch (Gbit) on 10.0.0.0/24 - mtu 9000 - for the file transport. Here
> > I use trunking. This gave so much better response with ls; now I have
> > 3029 files in a directory and ls -l takes only 2-3 seconds (test setup).
> >
> > Even under load this setup works just so much better, going from ls -l
> > with up to 1 min response time to 1-2 seconds under the same load and the
> > same equipment (running system). I have ext3 as the filesystem, and
> > io-threads only on the server, unify on the client, no other translators
> > server-side or client-side.
> >
> > Regards,
> >
> > Einar
> >
> > On Thu, 2008-04-24 at 12:47 +0200, Tom Myny wrote:
> >> Hello,
> >>
> >> I'm running afr on two storage servers, with three clients.
> >> So far we have copied over 500 million small files onto it, split across
> >> directories of 1000 files each.
> >>
> >> When doing ls in a directory containing 1000 subdirectories we have the
> >> following issues:
> >>
> >>
> >> - ls takes more than 15 minutes to complete in a directory with 1000
> >> folders. (This will be split into 100 folders later, but right now it's a
> >> big problem.)
> >> 	-> Yes, for now it's ls --color=auto by default on Debian :D
> >> - When doing copies from other clients, those copies halt until that ls is
> >> complete.
> >>
> >>
> >> Is there a way to
> >>
> >> 1) Make ls faster? (OK, I know it can't be as fast as on the filesystem
> >> itself, but on the filesystem (or an NFS mount) it takes at most 15 seconds.)
> >> 2) Keep other processes from freezing while someone is doing an ls?
> >> (Checking on the storage servers, we see a load of 0.00.)
> >>
> >> The filesystems we use are based on xfs.
> >> An example of a server config:
> >>
> >> volume sas-ds
> >>         type storage/posix
> >>         option directory /sas/data
> >> end-volume
> >>
> >> volume sas-ns
> >>         type storage/posix
> >>         option directory /sas/ns
> >> end-volume
> >>
> >> volume sata-ds
> >>         type storage/posix
> >>         option directory /sata/data
> >> end-volume
> >>
> >> volume sata-ns
> >>         type storage/posix
> >>         option directory /sata/ns
> >> end-volume
> >>
> >> volume sas-backup-ds
> >>         type protocol/client
> >>         option transport-type tcp/client
> >>         option remote-host x.x.x.x
> >>         option remote-subvolume sas-ds
> >> end-volume
> >>
> >> volume sas-backup-ns
> >>         type protocol/client
> >>         option transport-type tcp/client
> >>         option remote-host x.x.x.x
> >>         option remote-subvolume sas-ns
> >> end-volume
> >>
> >> ...
> >>
> >> volume sas-unify
> >>         type cluster/unify
> >>         subvolumes sas-ds-afr
> >>         option namespace sas-ns-afr
> >>         option scheduler rr
> >> end-volume
> >>
> >> volume sata-unify
> >>         type cluster/unify
> >>         subvolumes sata-ds-afr
> >>         option namespace sata-ns-afr
> >>         option scheduler rr
> >> end-volume
> >>
> >> volume sas
> >>         type performance/io-threads
> >>         option thread-count 16
> >>         option cache-size 256MB
> >>         subvolumes sas-unify
> >> end-volume
> >>
> >> volume sata
> >>         type performance/io-threads
> >>         option thread-count 16
> >>         option cache-size 256MB
> >>         subvolumes sata-unify
> >> end-volume
> >>
> >> ..
> >>
> >> I hope to fix this, because we want to double this next year :)
> >>
> >>
> >> Regards,
> >> Tom
> >>
> >>
> >>
> >> _______________________________________________
> >> Gluster-devel mailing list
> >> Gluster-devel@xxxxxxxxxx
> >> http://lists.nongnu.org/mailman/listinfo/gluster-devel
> > -- 
> > Einar Gautun                     einar.gautun@xxxxxxxxxxx
> >
> > Statens kartverk            | Norwegian Mapping Authority
> > 3507 Hønefoss               |    NO-3507 Hønefoss, Norway
> >
> > Ph +47 32118372   Fax +47 32118101       Mob +47 92692662
> >
> >
> > _______________________________________________
> > Gluster-devel mailing list
> > Gluster-devel@xxxxxxxxxx
> > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> >
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
-- 
Einar Gautun                     einar.gautun@xxxxxxxxxxx

Statens kartverk            | Norwegian Mapping Authority
3507 Hønefoss               |    NO-3507 Hønefoss, Norway

Ph +47 32118372   Fax +47 32118101       Mob +47 92692662



