Re: booster translator error

There are some known issues with the interaction between read-ahead and booster, so please try without booster. Moreover, booster is effective when it is loaded on the server side.
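
As a rough illustration of the first suggestion, here is a minimal sketch of the client spec quoted below with the booster volume dropped and read-ahead wired straight onto client1 (only a sketch of the change, not a tested configuration):

volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.98.98.1
  option remote-subvolume brick
end-volume

# booster volume removed; read-ahead now sits directly on client1
volume readahead
  type performance/read-ahead
  option page-size 1MB
  option page-count 64
  subvolumes client1
end-volume

# io-threads, io-cache and write-behind stay as in the quoted spec,
# still chaining off readahead

If booster is kept and loaded on the server side instead, one possible placement (again only a sketch, assuming the translator stacks there the same way as on the client) would be a performance/booster volume inserted between readahead-brick and the protocol/server volume in the server spec.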

avati

2008/5/16 nicolas prochazka <prochazka.nicolas@xxxxxxxxx>:

> Hi,
> I did some tests with the booster translator and found an issue.
> My test is very simple: client1 -> server1
>
> If I do:
>
> revelation vdisk # dd if=base of=dest bs=10MB
> 629+1 records in
> 629+1 records out
> 6290378752 bytes (6.3 GB) copied, 135.844 s, 46.3 MB/s
>
> Everything is OK. I'm not using booster in this case.
>
> If I do:
>
> LD_PRELOAD=/usr/local/lib64/glusterfs/glusterfs-booster.so dd if=base
> of=test bs=10MB
> dd: writing `test': Transport endpoint is not connected
> 1+0 records in
> 0+0 records out
> 0 bytes (0 B) copied, 0.113639 s, 0.0 kB/s
> Then I do an ls:
> revelation vdisk # ls
> ls: cannot open directory .: Transport endpoint is not connected
> If I do a cd / and then cd /mnt/vdisk, it works again.
>
>
> I'm using big files; sometimes /mnt/vdisk seems to be inaccessible, and dd
> shows this issue every time.
>
>
> Regards
> Nicolas Prochazka
>
>
>
>
> Version: GlusterFS 1.3.9
>
> Client conf file:
> volume client1
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 10.98.98.1
>   option remote-subvolume brick
> end-volume
>
> volume booster
>   type performance/booster
>   subvolumes client1
> end-volume
>
> volume readahead
>   type performance/read-ahead
>   option page-size 1MB
>   option page-count 64
>   subvolumes booster
> end-volume
>
> volume iothreads
>   type performance/io-threads
>   option thread-count 20
>   subvolumes readahead
> end-volume
>
> volume io-cache
>   type performance/io-cache
>   option cache-size 1000MB              # default is 32MB
>   option page-size 1MB                  # default is 128KB
>   option force-revalidate-timeout 1200  # default is 1
>   subvolumes iothreads
> end-volume
>
> volume writebehind
>   type performance/write-behind
>   option aggregate-size 512KB  # default is 0bytes
>   option flush-behind on       # default is 'off'
>   subvolumes io-cache
> end-volume
>
>
>
>
> Server conf file:
> volume brick1
>   type storage/posix
>   option directory /mnt/disks/export
> end-volume
>
> volume brick
>   type performance/io-threads
>   option thread-count 32
>   option cache-size 512MB
>   subvolumes brick1
> end-volume
>
> volume readahead-brick
>   type performance/read-ahead
>   option page-size 1M
>   option page-count 128
>   subvolumes brick
> end-volume
>
> volume server
>   type protocol/server
>   option transport-type tcp/server     # For TCP/IP transport
>   option window-size 2097152
>   option client-volume-filename /etc/glusterfs/glusterfs-client.vol
>   option auth.ip.brick.allow *
>   subvolumes readahead-brick
> end-volume
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>



-- 
If I traveled to the end of the rainbow
As Dame Fortune did intend,
Murphy would be there to tell me
The pot's at the other end.

