NFS performance degradation in 3.3

I've just run more tests and, without anything being logged, the NFS glusterfs
server's load rose to 6.00 (on a 4-core server), while the two bricks where the
"real" files are stored reached loads of 10. There are no error messages in the
log files (nfs, bricks, gluster).

Will deactivating NLM improve performance? Are there any other options?
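
In case it's relevant, I'd expect NLM to be switchable per volume with something
like the following (just a sketch based on the nfs.nlm entry in the volfile
quoted below; I haven't verified the exact option name against the 3.3 CLI):

    # sketch: turn off NLM for the "cloud" volume shown in the volfile below
    gluster volume set cloud nfs.nlm off
    # the other nfs.* options there (e.g. nfs.enable-ino32) should be settable the same way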

Thanks in advance for any hint,
Samuel.

On 19 July 2012 08:44, samuel <samu60 at gmail.com> wrote:

> These are the parameters that are set:
>
>  59: volume nfs-server
>  60:     type nfs/server
>  61:     option nfs.dynamic-volumes on
>  62:     option nfs.nlm on
>  63:     option rpc-auth.addr.cloud.allow *
>  64:     option nfs3.cloud.volume-id 84fcec8c-d11a-43b6-9689-3f39700732b3
>  65:     option nfs.enable-ino32 off
>  66:     option nfs3.cloud.volume-access read-write
>  67:     option nfs.cloud.disable off
>  68:     subvolumes cloud
>  69: end-volume
>
> And some errors are:
> [2012-07-18 17:57:00.391104] W [socket.c:195:__socket_rwv]
> 0-socket.nfs-server: readv failed (Connection reset by peer)
> [2012-07-18 17:57:29.805684] W [socket.c:195:__socket_rwv]
> 0-socket.nfs-server: readv failed (Connection reset by peer)
> [2012-07-18 18:04:08.603822] W [nfs3.c:3525:nfs3svc_rmdir_cbk] 0-nfs:
> d037df6: /one/var/datastores/0/99/disk.0 => -1 (Directory not empty)
> [2012-07-18 18:04:08.625753] W [nfs3.c:3525:nfs3svc_rmdir_cbk] 0-nfs:
> d037dfe: /one/var/datastores/0/99 => -1 (Directory not empty)
>
> The "Directory not empty" entries are just attempts to delete a directory with
> files inside, but I guess that should not increase the CPU load.
>
> The above case is just one of the many times the NFS daemon started eating
> CPU, but it's not the only scenario (deleting a non-empty directory) that
> causes the degradation. Sometimes it has happened without any concrete error
> in the log files. I'll try to run more tests and provide more debug
> information.
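>
> When it happens again, I plan to capture something along these lines (a
> sketch; the command names are what I understand the 3.3 CLI to offer, not
> verified on our setup):
>
>     # find the PID/port of the gluster NFS server process
>     gluster volume status cloud nfs
>     # gather per-brick I/O statistics while the load is high
>     gluster volume profile cloud start
>     gluster volume profile cloud info
>     # watch the NFS process itself (PID taken from the status output above)
>     top -p <nfs-pid>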
>
> Thanks for your answer so far,
> Samuel.
>
>
> On 18 July 2012 21:54, Anand Avati <anand.avati at gmail.com> wrote:
>
>> Is there anything in the nfs logs?
>>
>> Avati
>>
>> On Wed, Jul 18, 2012 at 9:44 AM, samuel <samu60 at gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> We're running a 4-node distributed-replicated environment (replica 2). We
>>> were using the gluster native client to access the volumes, but we were asked
>>> to add NFS access to the volume, so we started the NFS daemon on the bricks.
>>> Everything went OK, but we started experiencing some performance degradation
>>> when accessing the volume.
>>> We debugged the problem and found that, quite often, the NFS glusterfs
>>> process (NOT glusterfsd) eats up all the CPU and the server exporting NFS
>>> starts offering really bad performance.
>>>
>>> Is there any known issue with NFS performance in 3.3? Are there any NFS
>>> parameters to play with that could mitigate this degradation (standard
>>> read/write throughput drops to a quarter of its usual values)?
>>>
>>> Thanks in advance for any help,
>>>
>>> Samuel.
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>>
>>>
>>
>