Re: Doing LS with a lot of directory, files

Can you explain what the principal change was that made it go much faster? Splitting up GlusterFS and non-GlusterFS traffic? Increasing the MTU? I cannot see either of those making that big a difference...

Gordan

On Thu, 24 Apr 2008, Einar Gautun wrote:

Hello,
I've set up a split network: one switch (Gbit) carries the official
IP for ssh, ntp, NIS and so on, at MTU 1500. Another switch (Gbit)
carries 10.0.0.0/24 at MTU 9000 for the file transport, where I use
trunking. This gave much better response with ls: I now have
3029 files in a directory and ls -l takes only 2-3 seconds (test setup).

Even under load this setup works much better: ls -l went from up to
1 minute to 1-2 seconds under the same load on the same equipment
(running system). I have ext3 as the filesystem, io-threads only on
the server, unify on the client, and no other translators on either
the server or the client side.
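Jumbo frames only help if every device on the storage network actually accepts them, so it is worth verifying the MTU end to end before crediting it with a speedup. A minimal sketch, assuming a dedicated storage interface eth1 and a peer at 10.0.0.2 (both placeholders for your setup; these commands need root):

```shell
# Raise the MTU on the dedicated storage-network interface
# (the interface name is an assumption).
ip link set dev eth1 mtu 9000

# Verify end to end: with a 9000-byte MTU the largest ICMP payload is
# 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes. "-M do" forbids
# fragmentation, so this ping only succeeds if the whole path, including
# the switch, carries jumbo frames.
ping -c 3 -M do -s 8972 10.0.0.2
```

If the ping fails with "Message too long" while a plain ping works, some hop on the path is still at MTU 1500.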

Regards,

Einar

On Thu, 2008-04-24 at 12:47 +0200, Tom Myny wrote:
Hello,

I'm running AFR on two storage servers, with three clients.
So far we have copied over 500 million small files onto it, split
across directories of 1000 files each.
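Keeping directories at a fixed, small size is what keeps readdir cheap, and one common way to do it automatically is to derive the directory from a hash of the filename instead of maintaining counters. A hypothetical bash sketch (the two-level 256x256 fan-out is an arbitrary choice, not anything GlusterFS requires; at 500M files it works out to a few thousand entries per directory):

```shell
#!/usr/bin/env bash
# Map a filename to a two-level directory derived from its md5 hash,
# e.g. "invoice-000123.pdf" -> "ab/cd/invoice-000123.pdf" (the hex
# digits vary per name). 256 * 256 buckets keeps every directory at
# a few thousand entries even with 500 million files.
shard_path() {
    local name=$1
    local h
    h=$(printf '%s' "$name" | md5sum | cut -c1-4)
    printf '%s/%s/%s\n' "${h:0:2}" "${h:2:2}" "$name"
}

shard_path invoice-000123.pdf
```

The same function run at write time and at read time always yields the same path, so no index of file locations is needed.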

When doing ls in a directory containing 1000 subdirectories, we have
the following issues:


- ls takes more than 15 minutes to complete in a directory with 1000
folders. (This will also be split into 100 folders later, but right now
it's a big problem.)
	-> Yes, for now it's ls --color=auto by default on Debian :D
- While that ls runs, copies from the other clients halt until it
completes.
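The --color=auto aside matters more than it looks: a colored (or long) listing has to lstat() every entry to pick a color, and on a network filesystem each of those calls is a round trip, whereas a bare unsorted listing is essentially one readdir(). A quick local way to see the difference between the two modes (the temporary directory here is a stand-in for the GlusterFS mount, where the gap would be far larger):

```shell
# Create a throwaway directory with 3000 empty files.
dir=$(mktemp -d)
for i in $(seq 1 3000); do : > "$dir/file$i"; done

# Plain readdir: unsorted, uncolored; the backslash bypasses any alias.
time \ls -U --color=never "$dir" > /dev/null

# Forced color: ls must lstat() every entry to choose its color.
time \ls --color=always "$dir" > /dev/null
```

On a GlusterFS mount, comparing `\ls -U --color=never` against the aliased `ls` shows how much of the 15 minutes is per-entry metadata traffic rather than the directory read itself.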


Is there a way to

1) Make ls faster? (I know it can't be as fast as on the filesystem
itself, but on the filesystem, or over NFS, it takes at most 15 seconds.)
2) Keep the other processes from freezing while someone does an ls?
(Checking the storage servers, we have a load of 0.00.)

The underlying filesystems are XFS.
An example of a server config:

volume sas-ds
        type storage/posix
        option directory /sas/data
end-volume

volume sas-ns
        type storage/posix
        option directory /sas/ns
end-volume

volume sata-ds
        type storage/posix
        option directory /sata/data
end-volume

volume sata-ns
        type storage/posix
        option directory /sata/ns
end-volume

volume sas-backup-ds
        type protocol/client
        option transport-type tcp/client
        option remote-host x.x.x.x
        option remote-subvolume sas-ds
end-volume

volume sas-backup-ns
        type protocol/client
        option transport-type tcp/client
        option remote-host x.x.x.x
        option remote-subvolume sas-ns
end-volume

...

volume sas-unify
        type cluster/unify
        subvolumes sas-ds-afr
        option namespace sas-ns-afr
        option scheduler rr
end-volume

volume sata-unify
        type cluster/unify
        subvolumes sata-ds-afr
        option namespace sata-ns-afr
        option scheduler rr
end-volume

volume sas
        type performance/io-threads
        option thread-count 16
        option cache-size 256MB
        subvolumes sas-unify
end-volume

volume sata
        type performance/io-threads
        option thread-count 16
        option cache-size 256MB
        subvolumes sata-unify
end-volume

..
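For reference, a first suggestion often made on this list for slow listings was to stack a caching translator above the existing volumes. The fragment below is a sketch only, following the volume-spec conventions already used above: the translator and option names should be verified against your GlusterFS version, and note that io-cache caches file data rather than directory metadata, so it helps repeated reads more than ls itself.

```
volume sas-cached
        type performance/io-cache
        option cache-size 64MB       # memory reserved for cached file data
        option page-size 128KB       # granularity of cached pages
        subvolumes sas               # the io-threads volume defined above
end-volume
```

Clients would then point their remote-subvolume at sas-cached instead of sas (illustrative placement; the same translator can equally be stacked client-side).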

I hope to fix this, because we want to double the volume next year :)


Regards,
Tom



_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel
--
Einar Gautun                     einar.gautun@xxxxxxxxxxx

Statens kartverk            | Norwegian Mapping Authority
3507 Hønefoss               |    NO-3507 Hønefoss, Norway

Ph +47 32118372   Fax +47 32118101       Mob +47 92692662


