Re: optimising DLM speed?

Hi,

On Wed, 2011-02-16 at 19:41 +0000, Alan Brown wrote:
> > Directories of the size (number of entries) which you have indicated
> > should not be causing a problem as lookup should still be quite fast at
> > that scale.
> 
> Perhaps, but even so, 4000-file directories usually take over a minute
> to "ls -l", while 85k-file directories take 5 mins (20-40 mins on a bad
> day) - and this is mounted lock_dlm, single-node-only.
> 
> 
Yes, "ls -l" will always take longer because it is not just accessing the
directory, but also every inode in the directory. As a result, the I/O
pattern will generally be poor.

Also, the order in which GFS2 returns the directory entries is not
efficient when it is used to drive the stat calls associated with "ls
-l". Better performance can be obtained by sorting the inodes to be
stat()ed into inode-number order before making those calls.

The reason that the ordering is not ideal is that, without it, we could
not maintain a uniform view of the directory from a reader's point of
view while other processes are adding or removing entries. It is a
historical issue that we have inherited from GFS, and I've spent some
time trying to come up with a solution in kernel space, but in the end
a userland solution may be a better way to solve it.
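
For illustration, here is a minimal sketch of what such a userland
approach might look like (hypothetical example code, not an existing
tool): read all the directory entries first, sort them by inode number,
and only then stat() each one, so the inode I/O happens in inode-number
order rather than directory order.

/* sorted-stat.c: list a directory, stat()ing entries in inode order.
 * A sketch only; error handling is minimal. */
#include <dirent.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>

struct entry {
    ino_t ino;
    char name[256];
};

static int by_ino(const void *a, const void *b)
{
    ino_t ia = ((const struct entry *)a)->ino;
    ino_t ib = ((const struct entry *)b)->ino;
    return (ia > ib) - (ia < ib);
}

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : ".";
    struct entry *ents = NULL;
    size_t n = 0, cap = 0;
    struct dirent *de;
    DIR *dir = opendir(path);

    if (!dir) {
        perror("opendir");
        return 1;
    }

    /* Pass 1: read every entry, remembering its inode number. */
    while ((de = readdir(dir)) != NULL) {
        if (n == cap) {
            cap = cap ? cap * 2 : 1024;
            ents = realloc(ents, cap * sizeof(*ents));
            if (!ents) {
                perror("realloc");
                return 1;
            }
        }
        ents[n].ino = de->d_ino;
        snprintf(ents[n].name, sizeof(ents[n].name), "%s", de->d_name);
        n++;
    }
    closedir(dir);

    /* Sort so that pass 2 walks the inodes in inode-number order. */
    qsort(ents, n, sizeof(*ents), by_ino);

    /* Pass 2: stat() each entry in the sorted order. */
    for (size_t i = 0; i < n; i++) {
        char full[4352];
        struct stat st;

        snprintf(full, sizeof(full), "%s/%s", path, ents[i].name);
        if (stat(full, &st) == 0)
            printf("%10jd %s\n", (intmax_t)st.st_size, ents[i].name);
    }

    free(ents);
    return 0;
}

Comparing the run time of something like this against a plain "ls -l"
on one of the large directories should show how much of the cost is
down to the stat ordering.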

I assume that once the directory has been read in once, accesses will
be much faster on subsequent occasions.

Steve.


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

