Slow Replication over infiniband

OK, I've made another test, and it seems that Gluster works perfectly; the
problem comes from my RAID card configuration.
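For anyone hitting the same issue, a quick sanity check of the storage
behind the controller is a direct write test plus a look at the drive
write-cache setting (the device and file paths here are only examples):

  # raw sequential write, bypassing the page cache
  dd if=/dev/zero of=/data/ddtest bs=1M count=4096 oflag=direct
  # query the drive's write-cache setting
  hdparm -W /dev/sda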

So I will keep you informed,

Joel

On Thu, Oct 29, 2009 at 9:35 AM, joel vennin <joel.vennin at gmail.com> wrote:

> Thank you for the reply, but this kind of config changes nothing concerning
> the replication transfer rate.
>
> Note that glusterfs CPU usage on all boxes is less than 3%.
>
> All the boxes can write data at a transfer rate of 800 MB/s, but I'm still
> stuck at 30 MB/s.
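>
> For reference, a raw link test with ib_write_bw (from the perftest
> package) can confirm whether the fabric itself is the bottleneck; the
> address below is zodiac1's:
>
>   # on zodiac1 (server side)
>   ib_write_bw
>   # on another node (client side)
>   ib_write_bw 192.168.3.200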
>
> Any help or suggestion is welcome.
>
> Joel
>
>
> On Thu, Oct 29, 2009 at 9:04 AM, Christian Marnitz <
> christian.marnitz at icesmedia.de> wrote:
>
>> Hi,
>>
>> Take a look at the following performance features:
>>
>> ###################################################
>> ##### performance
>> ###################################################
>> volume iot
>>  type performance/io-threads
>>  option thread-count 16  # default is 16
>>  option min-threads 32
>>  option max-threads 256
>>  subvolumes mega-replicator
>> end-volume
>>
>> ##### writebehind - write aggregation
>> volume wb
>>  type performance/write-behind
>>  option cache-size 3MB         # default is equal to aggregate-size
>>  option flush-behind on        # default is 'off'
>>  subvolumes iot
>> end-volume
>>
>> ##### read-ahead - read aggregation
>> volume readahead
>>  type performance/read-ahead
>>  #option page-size 1048576     # default is 128KB
>>  option page-count 4           # default is 2
>>  option force-atime-update off # default is off
>>  subvolumes wb
>> end-volume
>>
>>
>> ##### io-cache - disable this if you need real-time data
>> volume ioc
>>  type performance/io-cache
>>  option cache-size 128MB   # default is 32MB
>>  option page-size 1MB          # default is 128KB
>>  option cache-timeout 1        # default is 1
>>  subvolumes readahead          # chain: iot -> wb -> readahead -> ioc
>> end-volume
>>
>> ---
>>
>> Here is the info written, what happens behind:
>>
>>
>> http://gluster.com/community/documentation/index.php/Translators/performance/readahead
>>
>> http://gluster.com/community/documentation/index.php/Translators/performance/writebehind
>>
>>
>> http://gluster.com/community/documentation/index.php/Translators/performance/io-threads
>>
>> http://gluster.com/community/documentation/index.php/Translators/performance/io-cache
>>
>>
>> I personally use writebehind and io-threads. What you need depends on
>> your workload.
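>>
>> For example, a minimal client-side stack using only those two could
>> look like this (mega-replicator being your replicate volume):
>>
>> volume iot
>>  type performance/io-threads
>>  option thread-count 16
>>  subvolumes mega-replicator
>> end-volume
>>
>> volume wb
>>  type performance/write-behind
>>  option cache-size 3MB
>>  option flush-behind on
>>  subvolumes iot
>> end-volume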
>>
>> Greetings,
>> Christian
>>
>>
>>
>>
>> -----Original Message-----
>> From: gluster-users-bounces at gluster.org [mailto:
>> gluster-users-bounces at gluster.org] On behalf of joel vennin
>> Sent: Thursday, 29 October 2009 07:44
>> To: gluster-users at gluster.org
>> Subject: Re: Slow Replication over infiniband
>>
>> Please, can someone give me a clue about increasing the transfer
>> rate? I'm ready to hack code if necessary.
>>
>> Thank you.
>>
>> On Tue, Oct 27, 2009 at 3:30 PM, joel vennin <joel.vennin at gmail.com>
>> wrote:
>>
>> > Hi, I have a setup composed of 3 nodes (zodiac1, zodiac2, zodiac3).
>> >
>> > Node config: 8 cores W5580 at 3.20 GHz, 76 GB RAM, 40 TB storage,
>> > InfiniBand mlx4_0 (20 Gb/s)
>> > Gluster version: 2.0.7, kernel Linux 2.6.30, FUSE 2.7.4
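>> >
>> > As a side note, the link rate can be verified with ibstat from
>> > infiniband-diags; a DDR 4X port should report Rate: 20:
>> >
>> >   ibstat mlx4_0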
>> >
>> >
>> > Config file:
>> >
>> > #----------------------------------------------------------
>> > # SERVER SIDE
>> > volume posix-1
>> >        type storage/posix
>> >        option directory /data/yacht-data2
>> > end-volume
>> >
>> > volume locks-1
>> >   type features/locks
>> >   subvolumes posix-1
>> > end-volume
>> >
>> >
>> > volume brick-1
>> >        type performance/io-threads
>> >        option thread-count 4
>> >        subvolumes locks-1
>> > end-volume
>> >
>> > volume server
>> >        type protocol/server
>> >        option transport-type                     ib-verbs
>> >        option transport.ib-verbs.device-name     mlx4_0
>> >        option auth.addr.brick-1.allow *
>> >        subvolumes brick-1 #brick-2
>> > end-volume
>> >
>> >
>> > # CLIENT NODE DECLARATION
>> >
>> > #
>> > # ZODIAC 1 CLIENT
>> > volume zbrick1
>> >        type protocol/client
>> >        option transport-type ib-verbs
>> >        option remote-host 192.168.3.200
>> >        option remote-subvolume brick-1
>> > end-volume
>> >
>> > # ZODIAC 2
>> > volume zbrick2
>> >        type protocol/client
>> >        option transport-type   ib-verbs
>> >        option remote-host      192.168.3.201
>> >        option remote-subvolume brick-1
>> > end-volume
>> >
>> > # ZODIAC 3
>> > volume zbrick3
>> >        type protocol/client
>> >        option transport-type   ib-verbs
>> >        option remote-host      192.168.3.202
>> >        option remote-subvolume brick-1
>> > end-volume
>> >
>> > # MEGA REPLICATE
>> > volume mega-replicator
>> >        type cluster/replicate
>> >        subvolumes zbrick1 zbrick2 zbrick3
>> > end-volume
>> > #----------------------------------------------------------
>> >
>> > Command on each server to start glusterfs:
>> >    glusterfs -f myreplica.vol /mnt/data/
>> >
>> >
>> >
>> > The scenario that I use:
>> >     zodiac1 and zodiac2 are correctly synchronized. Once everything
>> > is OK for zodiac1 and zodiac2, I start zodiac3. In order to
>> > synchronize it, I run ls -aLR /mnt/data on that box to trigger the
>> > self-heal. The replication starts, but the transfer rate is really
>> > slow: around 30 MB/s!
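>> >
>> > For comparison, a plain write through the mount point would show
>> > whether normal writes are capped as well or only the self-heal,
>> > e.g.:
>> >
>> >   dd if=/dev/zero of=/mnt/data/ddtest bs=1M count=2048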
>> >
>> > Do you have an idea how I can increase this rate?
>> >
>> >
>> > Thank you.
>> >
>>
>>
>

