Re: Any mature (better) solution (way) to handle slow performance of 'ls -l'


 



Thanks for your kind reply, which is very helpful for me. I am using 3.11.

Thanks a lot.  :-)

- Fei
On Thu, Jun 7, 2018 at 6:59 PM Poornima Gurusiddaiah
<pgurusid@xxxxxxxxxx> wrote:
>
> If you are not using applications that rely on 100% metadata consistency (databases, Kafka, AMQ, etc.), you can use the volume options below:
>
> # gluster volume set <volname> group metadata-cache
>
> # gluster volume set <volname> network.inode-lru-limit 200000
>
> # gluster volume set <volname> performance.readdir-ahead on
>
> # gluster volume set <volname> performance.parallel-readdir on
>
> For more information refer to [1]
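>
> For example, a minimal sketch of applying and then verifying these settings; gv0 is your volume name from the output below, while the mount point /mnt/gv0 and host server1 are just placeholders from my side:
>
> ```
> gluster volume set gv0 group metadata-cache
> gluster volume set gv0 network.inode-lru-limit 200000
> gluster volume set gv0 performance.readdir-ahead on
> gluster volume set gv0 performance.parallel-readdir on
>
> # confirm the effective values
> gluster volume get gv0 performance.parallel-readdir
> gluster volume get gv0 network.inode-lru-limit
>
> # if in doubt, remount the FUSE clients so they pick up the new client graph
> umount /mnt/gv0 && mount -t glusterfs server1:/gv0 /mnt/gv0
> ```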
>
> Also, which version of Gluster are you using? It is preferable to use 3.11 or above for these performance enhancements.
> Note that parallel-readdir will drastically improve 'ls -l' performance in your case, but it has a few known corner-case issues.
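>
> A quick way to check the version on both servers and clients (a sketch; run on each node):
>
> ```
> gluster --version | head -1     # CLI/server package
> glusterfs --version | head -1   # FUSE client package
> ```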
>
> Regards,
> Poornima
>
> [1] https://github.com/gluster/glusterdocs/pull/342/files#diff-62f536ad33b2c2210d023b0cffec2c64
>
> On Wed, May 30, 2018, 8:29 PM Yanfei Wang <backyes@xxxxxxxxx> wrote:
>>
>> Hi experts on glusterFS,
>>
>> In our testbed, we found that 'ls -l' performance is pretty slow.
>> From the perspective of the GlusterFS design space, our current
>> understanding is that we need to avoid 'ls' on directories, since it
>> traverses all bricks sequentially.
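>>
>> For reference, one way to quantify this on the FUSE mount (the mount
>> point and test directory below are just placeholders):
>>
>> ```
>> # drop kernel caches on the client, then time a cold and a warm listing
>> echo 3 > /proc/sys/vm/drop_caches
>> time ls -l /mnt/gv0/testdir > /dev/null
>> time ls -l /mnt/gv0/testdir > /dev/null
>>
>> # per-brick FOP latency breakdown while the listing runs
>> gluster volume profile gv0 start
>> gluster volume profile gv0 info
>> ```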
>>
>> We use generic setting for our testbed:
>>
>> ```
>> Volume Name: gv0
>> Type: Distributed-Replicate
>> Volume ID: 4a6f96f8-b3fb-4550-bd19-e1a5dffad4d0
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 19 x 3 = 57
>> Transport-type: tcp
>> Bricks:
>> ...
>> Options Reconfigured:
>> features.inode-quota: off
>> features.quota: off
>> cluster.quorum-reads: on
>> cluster.quorum-count: 2
>> cluster.quorum-type: fixed
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>> cluster.server-quorum-ratio: 51%
>>
>> ```
>
>
>>
>> Carefully consulting the docs, the NFS client seems to be the preferred
>> client for better 'ls' performance. However, that improvement comes from
>> caching metadata locally, I think, and the caching mechanism will come at
>> the cost of data coherence, right?
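>>
>> From what I understand, with an NFS client that trade-off is tuned with
>> the standard NFS attribute-cache mount options (a sketch; the server name
>> and mount points are placeholders):
>>
>> ```
>> # relaxed coherence: cache attributes for up to 30s, faster 'ls -l'
>> mount -t nfs -o vers=3,actimeo=30 nfs-server:/gv0 /mnt/gv0-nfs
>>
>> # strict coherence: no attribute caching, slower 'ls -l'
>> mount -t nfs -o vers=3,noac nfs-server:/gv0 /mnt/gv0-nfs
>> ```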
>>
>> I want to know the best or most mature way to trade off 'ls' performance
>> against data coherence in practice. Any comments are welcome.
>>
>> Thanks.
>>
>> -Fei
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-devel


