Any mature (better) way to handle slow 'ls -l' performance?

Hi GlusterFS experts,

In our testbed, we found that 'ls -l' performance is pretty slow.
Indeed, from the perspective of the GlusterFS design, our current
understanding is that we need to avoid running 'ls' on a directory,
because the listing traverses all bricks sequentially.
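
For context, the per-entry cost is easy to observe from the client
side. Here is a rough sketch (the mount path /mnt/gv0/somedir is just a
placeholder for your own):

```
# Sketch: see where 'ls -l' spends its time on a FUSE mount.
# 'ls -l' issues getdents64() plus a per-entry stat, and each of those
# stats becomes a network round trip on the mount.
strace -c ls -l /mnt/gv0/somedir > /dev/null

# Compare with a plain 'ls', which only needs the directory entries:
strace -c ls /mnt/gv0/somedir > /dev/null
```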

We use a generic setup for our testbed:

```
Volume Name: gv0
Type: Distributed-Replicate
Volume ID: 4a6f96f8-b3fb-4550-bd19-e1a5dffad4d0
Status: Started
Snapshot Count: 0
Number of Bricks: 19 x 3 = 57
Transport-type: tcp
Bricks:
...
Options Reconfigured:
features.inode-quota: off
features.quota: off
cluster.quorum-reads: on
cluster.quorum-count: 2
cluster.quorum-type: fixed
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.server-quorum-ratio: 51%

```
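In case it is useful for discussion, the options below are the ones
usually suggested for readdir-heavy workloads. This is a sketch only;
the option names should be verified against your GlusterFS release
(parallel-readdir, for instance, needs readdir-ahead enabled and a
reasonably recent version):

```
# Sketch: volume options that commonly speed up directory listings on a
# distributed-replicate volume (verify availability on your version).

# Prefetch directory entries ahead of the application's readdir() calls:
gluster volume set gv0 performance.readdir-ahead on

# Fan readdir out to the distribute subvolumes in parallel instead of
# walking the 19 subvolumes one after another (requires readdir-ahead):
gluster volume set gv0 performance.parallel-readdir on

# Serve stat information along with directory entries (readdirp), so
# 'ls -l' does not need a separate LOOKUP per file:
gluster volume set gv0 performance.stat-prefetch on
```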

Carefully consulting the docs, the NFS client appears to be the
preferred client for better 'ls' performance. However, I think this
improvement comes from caching metadata locally on the client, and that
caching mechanism will carry a penalty in data coherence, right?
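
If it helps frame the trade-off: with the native client there is a
middle ground where metadata is cached but invalidated by the server
when another client changes a file. The sketch below uses the
md-cache/upcall options from the upstream docs; please verify the names
and defaults on your release:

```
# Sketch: client-side metadata caching with server-driven invalidation,
# so cached attributes are revoked when another client modifies a file
# rather than being served stale until a timeout expires.

# Server side: enable upcall notifications to clients:
gluster volume set gv0 features.cache-invalidation on
gluster volume set gv0 features.cache-invalidation-timeout 600

# Client side (md-cache): cache stat/xattr data, honour invalidations:
gluster volume set gv0 performance.stat-prefetch on
gluster volume set gv0 performance.cache-invalidation on
gluster volume set gv0 performance.md-cache-timeout 600
```

On the NFS side, by contrast, the coherence window is simply the
attribute cache timeout, i.e. the actimeo=/noac mount options on the
client.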

I want to know the best or most mature way to trade off 'ls'
performance against data coherence in practice. Any comments are
welcome.

Thanks.

-Fei
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-devel


