Performance and redundancy help

Please forgive my ignorance, but I am not sure I understand the difference between client-side and server-side replication.

Could someone throw out a quick paragraph describing the two, and/or a pro/con list?
If I do client-side replication, how do I access the files on the servers?
Am I relying on the clients to sync the data between the servers?

There are times when no clients are up and the servers are "doing things" to the file system.

Thanks.

^C



Smart Weblications GmbH - Florian Wiessner wrote:
> Hi Chad,
> 
> On 23.02.2010 05:30, Chad wrote:
>> I finally got the servers transported 2000 miles, set up, wired, and
>> booted. Here are the vol files.
>> Just to reiterate, the issues are slow read/write performance and
>> clients hanging when one server goes down.
> 
> Yes, because you are doing server-side replication. Try client-side
> replication instead; then you won't need load balancers, and the clients
> will not hang if one server is down...
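
For reference, in a client-side replication setup the cluster/replicate (AFR)
translator sits in the client volfile, so every client connects to both servers
and performs the replication and failover itself; the servers then only export
their storage/posix + features/locks bricks. Below is a minimal, untested sketch
for the tcb volume. Only port 50001 and the tcb_locks subvolume name are taken
from the volfiles that follow; the 192.168.1.x addresses are placeholders for
the two servers.

volume tcb_server1
        type protocol/client
        option transport-type tcp
        option remote-host 192.168.1.24      # first server (placeholder address)
        option remote-port 50001
        option remote-subvolume tcb_locks    # connect to the brick, not the replicated volume
end-volume

volume tcb_server2
        type protocol/client
        option transport-type tcp
        option remote-host 192.168.1.25      # second server (placeholder address)
        option remote-port 50001
        option remote-subvolume tcb_locks
end-volume

# replication happens here, on the client
volume tcb_replicate
        type cluster/replicate
        subvolumes tcb_server1 tcb_server2
end-volume

With this layout each client writes to both servers itself, so no load balancer
or round-robin DNS entry is needed, and when one server goes down the client
keeps working against the surviving one; the copies are brought back in sync
(self-healed) when the failed server returns. The files are accessed the same
way as before: each client mounts its volfile, e.g.
glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/tcb, and sees one replicated
filesystem. If the servers themselves need to touch the data while no clients
are up, they should do so through a glusterfs mount of their own rather than by
writing to the backend directories directly, so that changes are still replicated.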
> 
> 
>> ### glusterfs.vol ###
>> ############################################
>> # Start tcb_cluster
>> ############################################
>> # the exported volume to mount                    # required!
>> volume tcb_cluster
>>         type protocol/client
>>         option transport-type tcp/client
>>         option remote-host glustcluster
>>         option remote-port 50001
>>         option remote-subvolume tcb_cluster
>> end-volume
>>
>> ############################################
>> # Start cs_cluster
>> ############################################
>> # the exported volume to mount                    # required!
>> volume cs_cluster
>>         type protocol/client
>>         option transport-type tcp/client
>>         option remote-host glustcluster
>>         option remote-port 50002
>>         option remote-subvolume cs_cluster
>> end-volume
>>
>> ############################################
>> # Start pbx_cluster
>> ############################################
>> # the exported volume to mount                    # required!
>> volume pbx_cluster
>>         type protocol/client
>>         option transport-type tcp/client
>>         option remote-host glustcluster
>>         option remote-port 50003
>>         option remote-subvolume pbx_cluster
>> end-volume
>>
>>
>> ---------------------------------------------------
>> ### glusterfsd.vol ###
>> #############################################
>> # Start tcb_data cluster
>> #############################################
>> volume tcb_local
>>         type storage/posix
>>         option directory /mnt/tcb_data
>> end-volume
>>
>> volume tcb_locks
>>         type features/locks
>>         option mandatory-locks on          # enables mandatory locking on all files
>>         subvolumes tcb_local
>> end-volume
>>
>> # dataspace on remote machine, look in /etc/hosts to see that
>> volume tcb_locks_remote
>>         type protocol/client
>>         option transport-type tcp
>>         option remote-port 50001
>>         option remote-host 192.168.1.25
>>         option remote-subvolume tcb_locks
>> end-volume
>>
>> # automatic file replication translator for dataspace
>> volume tcb_cluster_afr
>>         type cluster/replicate
>>         subvolumes tcb_locks tcb_locks_remote
>> end-volume
>>
>> # the actual exported volume
>> volume tcb_cluster
>>         type performance/io-threads
>>         option thread-count 256
>>         option cache-size 128MB
>>         subvolumes tcb_cluster_afr
>> end-volume
>>
>> volume tcb_cluster_server
>>         type protocol/server
>>         option transport-type tcp
>>         option transport.socket.listen-port 50001
>>         option auth.addr.tcb_locks.allow *
>>         option auth.addr.tcb_cluster.allow *
>>         option transport.socket.nodelay on
>>         subvolumes tcb_cluster
>> end-volume
>>
>> #############################################
>> # Start cs_data cluster
>> #############################################
>> volume cs_local
>>         type storage/posix
>>         option directory /mnt/cs_data
>> end-volume
>>
>> volume cs_locks
>>         type features/locks
>>         option mandatory-locks on          # enables mandatory locking on all files
>>         subvolumes cs_local
>> end-volume
>>
>> # dataspace on remote machine, look in /etc/hosts to see that
>> volume cs_locks_remote
>>         type protocol/client
>>         option transport-type tcp
>>         option remote-port 50002
>>         option remote-host 192.168.1.25
>>         option remote-subvolume cs_locks
>> end-volume
>>
>> # automatic file replication translator for dataspace
>> volume cs_cluster_afr
>>         type cluster/replicate
>>         subvolumes cs_locks cs_locks_remote
>> end-volume
>>
>> # the actual exported volume
>> volume cs_cluster
>>         type performance/io-threads
>>         option thread-count 256
>>         option cache-size 128MB
>>         subvolumes cs_cluster_afr
>> end-volume
>>
>> volume cs_cluster_server
>>         type protocol/server
>>         option transport-type tcp
>>         option transport.socket.listen-port 50002
>>         option auth.addr.cs_locks.allow *
>>         option auth.addr.cs_cluster.allow *
>>         option transport.socket.nodelay on
>>         subvolumes cs_cluster
>> end-volume
>>
>> #############################################
>> # Start pbx_data cluster
>> #############################################
>> volume pbx_local
>>         type storage/posix
>>         option directory /mnt/pbx_data
>> end-volume
>>
>> volume pbx_locks
>>         type features/locks
>>         option mandatory-locks on          # enables mandatory locking on all files
>>         subvolumes pbx_local
>> end-volume
>>
>> # dataspace on remote machine, look in /etc/hosts to see that
>> volume pbx_locks_remote
>>         type protocol/client
>>         option transport-type tcp
>>         option remote-port 50003
>>         option remote-host 192.168.1.25
>>         option remote-subvolume pbx_locks
>> end-volume
>>
>> # automatic file replication translator for dataspace
>> volume pbx_cluster_afr
>>         type cluster/replicate
>>         subvolumes pbx_locks pbx_locks_remote
>> end-volume
>>
>> # the actual exported volume
>> volume pbx_cluster
>>         type performance/io-threads
>>         option thread-count 256
>>         option cache-size 128MB
>>         subvolumes pbx_cluster_afr
>> end-volume
>>
>> volume pbx_cluster_server
>>         type protocol/server
>>         option transport-type tcp
>>         option transport.socket.listen-port 50003
>>         option auth.addr.pbx_locks.allow *
>>         option auth.addr.pbx_cluster.allow *
>>         option transport.socket.nodelay on
>>         subvolumes pbx_cluster
>> end-volume
>>
>>
>> -- 
>> ^C
>>
>>
>>
>> Smart Weblications GmbH - Florian Wiessner wrote:
>>> Hi,
>>>
>>> On 16.02.2010 01:58, Chad wrote:
>>>> I am new to glusterfs and to this list; please let me know if I have made
>>>> any mistakes in posting this.
>>>> I am not sure what your standards are.
>>>>
>>>> I came across glusterfs last week; it was super easy to set up and test
>>>> and is almost exactly what I want/need.
>>>> I set up 2 "glusterfs servers" that serve a mirrored RAID 5 disk,
>>>> partitioned into three 500 GB partitions, to 6 clients.
>>>> I am using round-robin DNS, but I also tried heartbeat and
>>>> ldirectord (see details below).
>>>> Each server has 2 NICs: one for the clients, the other connected to the
>>>> other server with a crossover cable. Both NICs are 1000 Mbps.
>>>>
>>>> There are only 2 issues.
>>>> #1. When one of the servers goes down, the clients hang, at least for a
>>>> little while (more testing is needed); I am not sure if the clients can
>>>> recover at all.
>>>> #2. My read/write tests came in at 1.6 when using glusterfs, 11 with NFS
>>>> on the same machines, and 111 in a direct test on the data server. How do
>>>> I improve the performance?
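
One thing that is commonly tried for this kind of slowdown (a sketch, not a
configuration taken from this thread) is to stack the client-side performance
translators, write-behind, read-ahead and io-cache, on top of the volume in the
client volfile. The volume names below are hypothetical, and the option names
follow the 2.x/3.0-era volfile documentation, so they should be checked against
the installed version:

# hypothetical client-side performance stack
volume wb
        type performance/write-behind
        option flush-behind on        # return from close() before the final flush completes
        subvolumes client_volume      # replace with the name of your top client-side volume
end-volume

volume ra
        type performance/read-ahead
        option page-count 4           # read a few pages ahead on sequential reads
        subvolumes wb
end-volume

volume ioc
        type performance/io-cache
        option cache-size 64MB        # cache recently read data
        option cache-timeout 1        # seconds before cached entries are revalidated
        subvolumes ra
end-volume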
>>>>
>>> Please share your vol-files. I don't understand why you would need
>>> load balancers.
>>>
>>>> ###############################################
>>>> My glusterfs set-up:
>>>> 2 Supermicro machines, dual 3.0 GHz Xeon CPUs, 8 GB RAM, 4 @ 750 GB Seagate
>>>> SATA HDs, 3 in RAID 5 with 1 hot spare. (data servers)
>>> Why not use RAID 10? Same capacity, better speed...
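
(To spell out the arithmetic behind that suggestion: four 750 GB drives in
RAID 10 give 2 x 750 GB = 1.5 TB usable, the same as the current 3-disk RAID 5
of 750 GB drives plus a hot spare, while avoiding the parity write overhead of
RAID 5.)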
>>>
>>>> 6 Supermicro machines, dual 2.8 GHz AMD CPUs, 4 GB RAM, 2 @ 250 GB Seagate
>>>> SATA HDs in RAID 1. (client machines)
>>>> glusterfs is set up with round-robin DNS to handle the load balancing of
>>>> the 2 data servers.
>>> AFAIK there is no need to set up DNS round-robin or load balancing for the
>>> gluster servers; glusterfs should take care of that itself. But without your
>>> volfiles I can't give any hints.
>>>
>>>
> 
> 
> 

