Some Unify questions

Hi Deian,

On Fri, Oct 10, 2008 at 8:51 PM, Deian Chepishev <dchepishev at nexbrod.com> wrote:

> Hi guys,
>
> I have a few questions about UNIFY and volume creation.
>
> You will find my config files at the end of this post. I will post my
> questions before the config.
>
> 1. I want to use the writebehind and readahead translators, because I think
> they speed up transfers. Can you please take a look and let me know if it
> is written correctly?
> I basically do this:
> create one volume from the exported bricks, let's say "unify"
> create another volume named "writebehind" with subvolumes unify
> then create another volume named "readahead" with subvolumes writebehind
> then mount the volume named writebehind.


If you are using the --volume-name option to glusterfs to attach to
writebehind, then you are bypassing readahead and hence will not get
read-ahead functionality. If you want both read-ahead and write-behind
functionality, do not specify the --volume-name option at all (or pass
readahead as its argument, if you do want to use the option).
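
For example, assuming the client spec file below is saved as
/etc/glusterfs/glusterfs-client.vol (the path is only an assumption), the
two invocations would look like:

  # attaches to the last volume in the spec file (readahead),
  # keeping both translators in the stack
  glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs

  # equivalent, naming the volume explicitly
  glusterfs -f /etc/glusterfs/glusterfs-client.vol --volume-name readahead /mnt/glusterfs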


>
> Is this correct or there is a better way to do this?
>
> 2. Does the UNIFY volume add special attributes to the files like AFR does?


No, unify does not add special attributes to the files.
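
If you want to verify this on your own backend, here is a minimal sketch
(the file path is hypothetical):

  # dump all extended attributes, including the trusted.* namespace, on a
  # backend file (run as root)
  getfattr -d -m . /storage/gluster-export/data/somefile

AFR's attributes will show up there on the namespace bricks, while files on
the plain unify bricks carry none.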


>
>
> 3. Is there a way to specify in /etc/fstab exactly which volume I want
> to mount?
> In the docs they say:
>
> 192.168.0.1 /mnt/glusterfs glusterfs defaults 0 0
>
> OR
>
> /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs glusterfs defaults 0 0
>
> However, I don't know which volume will be mounted if I have more than one
> defined in the config, as in my case.
> Is there a way to specify which one should be mounted?


The simple way would be to make the volume you want to attach to the last
(bottom-most) volume in the volume-specification file (you have to delete
the unnecessary volumes, or disconnect them from the graph by not listing
them in the subvolumes of any other volume).
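
In your spec file readahead is already the bottom-most volume, so an fstab
entry like the one from the docs (assuming the file lives at
/etc/glusterfs/glusterfs-client.vol) would attach to it:

  # mounts the bottom-most volume of the spec file, i.e. readahead
  /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs glusterfs defaults 0 0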


>
> 4. I use a Gigabit network and noticed that when I use dd to write a file
> I get around 115 MB/sec, and during this time the glusterfs process puts
> around 30-40% CPU load on a machine with a Quad Core 2GHz Xeon, which
> seems quite high to me.
>
> If I mount the GFS system locally on the server, glusterfs loads the CPU
> to 100%.
>
> Is this high load normal, or am I missing something?
> What can I do to lower the load?
>
>
>
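For reference, a dd run along these lines (the block size, count, and test
path are only assumptions, since the original command was not posted) would
look like:

  # write 1 GB through the mount to measure streaming throughput
  dd if=/dev/zero of=/mnt/glusterfs/testfile bs=1M count=1024
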
> I have the following server and client files:
>
>
> volume brick
>  type storage/posix
>  option directory /storage/gluster-export/data/
> end-volume
>
> volume brick-ns
>  type storage/posix
>  option directory /storage/gluster-export/ns
> end-volume
>
> ### Add network serving capability to above brick.
>
> volume server
>  type protocol/server
>  option transport-type tcp/server
>  subvolumes brick brick-ns
>  option auth.ip.brick.allow 10.1.124.*
>  option auth.ip.brick-ns.allow 10.1.124.*
> end-volume
>
> =========================
>
> Client:
>
> volume brick1-stor01
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 10.1.124.200
>  option remote-subvolume brick
> end-volume
>
> volume brick1-stor02
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 10.1.124.201
>  option remote-subvolume brick
> end-volume
>
> volume brick-ns1
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 10.1.124.200
>  option remote-subvolume brick-ns
> end-volume
>
>
> volume brick-ns2
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 10.1.124.201
>  option remote-subvolume brick-ns  # same remote volume name as brick-ns1, but on a different host
> end-volume
>
> volume afr-ns
>  type cluster/afr
>  subvolumes brick-ns1 brick-ns2
> end-volume
>
> volume unify
>  type cluster/unify
>  option namespace afr-ns
>  option scheduler alu   # use the ALU scheduler (only one scheduler option applies per volume, so the earlier rr line is dropped)
>  option alu.limits.min-free-disk  5%      # Don't create files on a
> volume with less than 5% free disk space
>  option alu.limits.max-open-files 10000   # Don't create files on a
> volume with more than 10000 files open
>  option alu.order
> disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
>  option alu.disk-usage.entry-threshold 100GB   # Kick in when the
> disk-usage discrepancy between volumes exceeds 100GB
>  option alu.disk-usage.exit-threshold  50MB   # Don't stop writing to
> the least-used volume until the discrepancy has dropped 50MB below 100GB
>  option alu.open-files-usage.entry-threshold 1024   # Kick in when the
> discrepancy in open files reaches 1024
>  option alu.open-files-usage.exit-threshold 32   # Don't stop until 992
> files have been written to the least-used volume
>  option alu.stat-refresh.interval 10sec   # Refresh the statistics used
> for decision-making every 10 seconds
>  subvolumes brick1-stor01 brick1-stor02
> end-volume
>
> volume writebehind
>  type performance/write-behind
>  option aggregate-size 512kb # default is 0 bytes
>  option flush-behind on    # default is 'off'
>  subvolumes unify
> end-volume
>
> volume readahead
>  type performance/read-ahead
>  option page-size 512kB
>  option page-count 4
>  option force-atime-update off
>  subvolumes writebehind
> end-volume
>
>
>
> Sorry for the long post and thank you in advance.


regards,

-- 
Raghavendra G