Re: about glusterfs--mainline--3.0--patch-717

No,
when I run glusterfs with the --debug option, nothing appears.
I just see that the glusterfsd process consumes a lot of CPU, and that with a big file (10G) accessed by many clients simultaneously, glusterfs/glusterfsd(?) seems to stop responding: no data is sent, as if there were a deadlock or a loop.
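Since --debug prints nothing, a minimal sketch of how the busy glusterfsd could be inspected on the server side to distinguish a spin loop from a deadlock (an editorial addition, assuming gdb and the standard pidof/top tools are available; these commands are not suggested anywhere in the thread):

# Watch CPU usage of the server process.
top -p "$(pidof glusterfsd)"

# Dump backtraces of all threads; if repeated dumps show the same
# stacks stuck in lock calls, that points at a deadlock, while
# constantly changing stacks point at a busy loop.
gdb -p "$(pidof glusterfsd)" -batch -ex "thread apply all bt"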

Regards,
Nicolas Prochazka

2008/12/9 Anand Avati <avati@xxxxxxxxxxxxx>
Nicolas,
Do you have logs from the client and server?

avati

2008/12/9 nicolas prochazka <prochazka.nicolas@xxxxxxxxx>:
> hi again,
> about glusterfs--mainline--3.0--patch-727 with the same configuration:
> glusterfsd now seems to take a lot of CPU (> 20%), and ls -l
> /glustermount/ takes very long to respond (> 5 minutes).
> We can note that with patch-719 the issue does not appear.
>
> Nicolas Prochazka.
>
> 2008/12/8 nicolas prochazka <prochazka.nicolas@xxxxxxxxx>
>>
>> Thanks, it's working now.
>> Regards,
>> Nicolas Prochazka
>>
>> 2008/12/8 Basavanagowda Kanur <gowda@xxxxxxxxxxxxx>
>>>
>>> Nicolas,
>>>   Please use glusterfs--mainline--3.0--patch-719.
>>>
>>> --
>>> gowda
>>>
>>> On Mon, Dec 8, 2008 at 3:07 PM, nicolas prochazka
>>> <prochazka.nicolas@xxxxxxxxx> wrote:
>>>>
>>>> Hi,
>>>> It seems that glusterfs--mainline--3.0--patch-717 has a new problem
>>>> which did not appear, at least with glusterfs--mainline--3.0--patch-710.
>>>> Now I get:
>>>> ls: cannot open directory /mnt/vdisk/: Software caused connection abort
>>>>
>>>> Regards,
>>>> Nicolas Prochazka.
>>>>
>>>> My client spec file:
>>>> volume brick1
>>>> type protocol/client
>>>> option transport-type tcp/client # for TCP/IP transport
>>>> option remote-host 10.98.98.1   # IP address of server1
>>>> option remote-subvolume brick    # name of the remote volume on server1
>>>> end-volume
>>>>
>>>> volume brick2
>>>> type protocol/client
>>>> option transport-type tcp/client # for TCP/IP transport
>>>> option remote-host 10.98.98.2   # IP address of server2
>>>> option remote-subvolume brick    # name of the remote volume on server2
>>>> end-volume
>>>>
>>>> volume afr
>>>> type cluster/afr
>>>> subvolumes brick1 brick2
>>>> end-volume
>>>>
>>>> volume iothreads
>>>> type performance/io-threads
>>>> option thread-count 4
>>>> option cache-size 32MB
>>>> subvolumes afr
>>>> end-volume
>>>>
>>>> volume io-cache
>>>> type performance/io-cache
>>>> option cache-size 256MB             # default is 32MB
>>>> option page-size  1MB              # default is 128KB
>>>> option force-revalidate-timeout 2  # default is 1
>>>> subvolumes iothreads
>>>> end-volume
>>>>
>>>> My server spec file:
>>>> volume brickless
>>>> type storage/posix
>>>> option directory /mnt/disks/export
>>>> end-volume
>>>>
>>>> volume brick
>>>> type features/posix-locks
>>>> option mandatory on          # enables mandatory locking on all files
>>>> subvolumes brickless
>>>> end-volume
>>>>
>>>> volume server
>>>> type protocol/server
>>>> subvolumes brick
>>>> option transport-type tcp
>>>> option auth.addr.brick.allow 10.98.98.*
>>>> end-volume
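
For reference, a sketch of how spec files like the two above are typically loaded in this era of GlusterFS. The file paths are assumptions for illustration, and -f (--spec-file) is the spec-file option of the 1.x/2.x command line; verify both against the installed version:

# On each server (10.98.98.1 and 10.98.98.2), start glusterfsd with the server spec:
glusterfsd -f /etc/glusterfs/server.vol

# On the client, mount the volume through the client spec:
glusterfs -f /etc/glusterfs/client.vol /mnt/vdisk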
>>>>
>>>>
>>>> _______________________________________________
>>>> Gluster-devel mailing list
>>>> Gluster-devel@xxxxxxxxxx
>>>> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>>>>
>>>
>>>
>>>
>>> --
>>> hard work often pays off after time, but laziness always pays off now
>>
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
>

