performance issue

You can try this:

# /etc/glusterfs/glusterfsd.vol
volume server
 type protocol/server
 option transport-type tcp/server
 *option transport.socket.nodelay on*
 option auth.addr.brick.allow *
 subvolumes brick
end-volume

and on the client, in each brick's protocol/client volume (brick1 is shown here):

#/etc/glusterfs/glusterfs.vol

volume brick1
 type protocol/client
 option transport-type tcp
 *option transport.socket.nodelay on*
 option remote-host       # IP address of the remote brick
 option remote-subvolume brick        # name of the remote volume
end-volume
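
If it helps, here is a minimal sketch of how the change could be applied and
re-checked (assuming the volfiles live at the paths above, the mount point is
/users/glusterfs_mnt as in your dd test, and the same nodelay line is added to
each of the four protocol/client volumes, brick1 through brick4):

# on every server: restart glusterfsd so the updated volfile is loaded
killall glusterfsd
glusterfsd -f /etc/glusterfs/glusterfsd.vol

# on the client: remount and repeat the same dd run
umount /users/glusterfs_mnt
glusterfs -f /etc/glusterfs/glusterfs.vol /users/glusterfs_mnt
dd if=/dev/zero of=/users/glusterfs_mnt/sample bs=1k count=100000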

Good luck
Andoni Ayala

2009/12/21 anthony garnier <sokar6012 at hotmail.com>

>
> Hi,
> I just installed the latest version of GlusterFS, 3.0, and I am getting
> really bad performance.
> Here is the test:
>  # dd if=/dev/zero of=/users/glusterfs_mnt/sample bs=1k count=100000
>
> This creates a file of about 100 MB, and I got these results:
> NFS = 75 MB/s
> Gluster = 5.8 MB/s
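> (For scale: bs=1k count=100000 is roughly 100 MB; at 5.8 MB/s that run
> takes about 18 s, versus about 1.4 s at the NFS rate of 75 MB/s.)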
>
> I tried changing the block size, changing the write-behind parameters,
> adding the read-ahead translator, removing all the performance translators
> (which was even worse :\ ), and trying the AFR translator ... but no change!
>
> My configuration is replicated and distributed (RAID 10 over the network)
> on the server side over 4 bricks, with Gigabit Ethernet on all servers and
> clients.
>
> What kind of performance do you get? Is this normal?
> I also tried to run an iostat test, and it ran for 24 hours of CPU time
> (3 days in total)....
> Here are the vol files for the server and the client:
>
> # /etc/glusterfs/glusterfsd.vol
>
> volume posix
>  type storage/posix
>  option directory /users/gluster-data
> end-volume
>
> volume locks
>  type features/locks
>  subvolumes posix
> end-volume
>
> volume brick
>  type performance/io-threads
>  option thread-count 8
>  subvolumes locks
> end-volume
>
> volume server
>  type protocol/server
>  option transport-type tcp/server
>  option auth.addr.brick.allow *
>  subvolumes brick
> end-volume
> ------------------------------------------------
> #/etc/glusterfs/glusterfs.vol
>
> volume brick1
>  type protocol/client
>  option transport-type tcp
>  option remote-host       # IP address of the remote brick
>  option remote-subvolume brick        # name of the remote volume
> end-volume
>
> volume brick2
>  type protocol/client
>  option transport-type tcp
>  option remote-host       # IP address of the remote brick
>  option remote-subvolume brick        # name of the remote volume
> end-volume
>
> volume brick3
>  type protocol/client
>  option transport-type tcp
>  option remote-host
>  option remote-subvolume brick
> end-volume
>
> volume brick4
>  type protocol/client
>  option transport-type tcp
>  option remote-host
>  option remote-subvolume brick
> end-volume
>
> volume rep1
>  type cluster/replicate
>  subvolumes brick1 brick2
> end-volume
>
> volume rep2
>  type cluster/replicate
>  subvolumes brick3 brick4
> end-volume
>
> volume distribute
>  type cluster/distribute
>  subvolumes rep1 rep2
> end-volume
>
> volume writebehind
>  type performance/write-behind
>  option window-size 1MB
>  subvolumes distribute
> end-volume
>
> volume cache
>  type performance/io-cache
>  option cache-size 512MB
>  subvolumes writebehind
> end-volume
>
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>

