Thanks Bernhard, I will do this.
Regards,
2014-02-19 14:43 GMT-03:00 Bernhard Glomm <bernhard.glomm@xxxxxxxxxxx>:
I would strongly recommend to restart fresh with gluster 3.4.2. It works totally fine for me.
(Reinstall the VMs as slim as possible if you can.)

As a quick howto consider this:
- We have 2 hardware machines (just desktop machines for a dev environment)
- both running ZoL
- create a zpool and a zfs filesystem
- create a gluster replica 2 volume between hostA and hostB (host-side commands are sketched at the end of this mail)
- install 3 VMs, vmachine0{4,5,6}
- vmachine0{4,5} each have a 100GB disk image file as /dev/vdb, which also resides on the gluster volume
- create an ext3 filesystem on vmachine0{4,5}:/dev/vdb1
- create a gluster replica 2 volume between vmachine04 and vmachine05 as shown below
  (!!! obviously nobody would do that in any serious environment, just to show that even a setup like that _would_ be possible !!!)
- run some benchmarks on that volume and compare the results to other setups

So:

root@vmachine04[/0]:~ # mkdir -p /srv/vdb1/gf_brick
root@vmachine04[/0]:~ # mount /dev/vdb1 /srv/vdb1/
root@vmachine04[/0]:~ # gluster peer probe vmachine05
peer probe: success

# now switch over to vmachine05 and do
root@vmachine05[/1]:~ # mkdir -p /srv/vdb1/gf_brick
root@vmachine05[/1]:~ # mount /dev/vdb1 /srv/vdb1/
root@vmachine05[/1]:~ # gluster peer probe vmachine04
peer probe: success
root@vmachine05[/1]:~ # gluster peer probe vmachine04
peer probe: success: host vmachine04 port 24007 already in peer list
# the peer probe from BOTH sides is often forgotten

# switch back to vmachine04 and continue with
root@vmachine04[/0]:~ # gluster peer status
Number of Peers: 1

Hostname: vmachine05
Port: 24007
Uuid: 085a1489-dabf-40bb-90c1-fbfe66539953
State: Peer in Cluster (Connected)
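# (the volume create/start step is missing from this transcript; it would have been
#  done at this point, roughly like the following. The exact command line is an
#  assumption reconstructed from the bricks shown below, and the CLI output is omitted.)
root@vmachine04[/0]:~ # gluster volume create layer_cake_volume replica 2 transport tcp vmachine04:/srv/vdb1/gf_brick vmachine05:/srv/vdb1/gf_brick
root@vmachine04[/0]:~ # gluster volume start layer_cake_volume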
root@vmachine04[/0]:~ # gluster volume info layer_cake_volume
Volume Name: layer_cake_volume
Type: Replicate
Volume ID: ef5299db-2896-4631-a2a8-d0082c1b25be
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: vmachine04:/srv/vdb1/gf_brick
Brick2: vmachine05:/srv/vdb1/gf_brick

root@vmachine04[/0]:~ # gluster volume status layer_cake_volume
Status of volume: layer_cake_volume
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick vmachine04:/srv/vdb1/gf_brick             49152   Y       12778
Brick vmachine05:/srv/vdb1/gf_brick             49152   Y       16307
NFS Server on localhost                         2049    Y       12790
Self-heal Daemon on localhost                   N/A     Y       12791
NFS Server on vmachine05                        2049    Y       16320
Self-heal Daemon on vmachine05                  N/A     Y       16319

There are no active volume tasks

# set any option you might like
root@vmachine04[/1]:~ # gluster volume set layer_cake_volume network.remote-dio enable
volume set: success

# go to vmachine06 and mount the volume
root@vmachine06[/1]:~ # mkdir /srv/layer_cake
root@vmachine06[/1]:~ # mount -t glusterfs -o backupvolfile-server=vmachine05 vmachine04:/layer_cake_volume /srv/layer_cake
root@vmachine06[/1]:~ # mount
vmachine04:/layer_cake_volume on /srv/layer_cake type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
root@vmachine06[/1]:~ # df -h
Filesystem                      Size  Used Avail Use% Mounted on
...
vmachine04:/layer_cake_volume    97G  188M   92G   1% /srv/layer_cake

All fine and stable

# now let's see how it tastes
# note this is postmark on / NOT on the gluster-mounted layer_cake_volume!
# the postmark results might be available tomorrow ;-)))
root@vmachine06[/1]:~ # postmark
PostMark v1.51 : 8/14/01
pm>set transactions 500000
pm>set number 200000
pm>set subdirectories 10000
pm>run
Creating subdirectories...Done
Creating files...Done
Performing transactions..........Done
Deleting files...Done
Deleting subdirectories...Done
Time:
        2314 seconds total
        2214 seconds of transactions (225 per second)
Files:
        450096 created (194 per second)
                Creation alone: 200000 files (4166 per second)
                Mixed with transactions: 250096 files (112 per second)
        249584 read (112 per second)
        250081 appended (112 per second)
        450096 deleted (194 per second)
                Deletion alone: 200192 files (3849 per second)
                Mixed with transactions: 249904 files (112 per second)
Data:
        1456.29 megabytes read (644.44 kilobytes per second)
        2715.89 megabytes written (1.17 megabytes per second)

# reference
# running postmark on the hardware machine directly on zfs
#
# /test # postmark
# PostMark v1.51 : 8/14/01
# pm>set transactions 500000
# pm>set number 200000
# pm>set subdirectories 10000
# pm>run
# Creating subdirectories...Done
# Creating files...Done
# Performing transactions..........Done
# Deleting files...Done
# Deleting subdirectories...Done
# Time:
#         605 seconds total
#         549 seconds of transactions (910 per second)
#
# Files:
#         450096 created (743 per second)
#                 Creation alone: 200000 files (4255 per second)
#                 Mixed with transactions: 250096 files (455 per second)
#         249584 read (454 per second)
#         250081 appended (455 per second)
#         450096 deleted (743 per second)
#                 Deletion alone: 200192 files (22243 per second)
#                 Mixed with transactions: 249904 files (455 per second)
#
# Data:
#         1456.29 megabytes read (2.41 megabytes per second)
#         2715.89 megabytes written (4.49 megabytes per second)

dbench -D /srv/layer_cake 5

Operation      Count    AvgLat    MaxLat
----------------------------------------
NTCreateX     195815     5.159   333.296
Close         143870     0.793    93.619
Rename          8310    10.922   123.096
Unlink         39525     2.428   203.753
Qpathinfo     177736     2.551   220.605
Qfileinfo      31030     2.057   175.565
Qfsinfo        32545     1.393   174.045
Sfileinfo      15967     2.691   129.028
Find           68664     9.629   185.739
WriteX         96860     0.841   108.863
ReadX         307834     0.511   213.602
LockX            642     1.511    10.578
UnlockX          642     1.541    10.137
Flush          13712    12.853   405.383

Throughput 10.1832 MB/sec  5 clients  5 procs  max_latency=405.405 ms

# reference
dbench -D /tmp 5

Operation      Count    AvgLat    MaxLat
----------------------------------------
NTCreateX    3817455     0.119   499.847
Close        2804160     0.005    16.000
Rename        161655     0.322   459.790
Unlink        770906     0.556   762.314
Deltree           92    20.647    81.619
Mkdir             46     0.003     0.012
Qpathinfo    3460227     0.017    18.388
Qfileinfo     606258     0.003    11.652
Qfsinfo       634444     0.006    14.976
Sfileinfo     310990     0.155   604.585
Find         1337732     0.056    18.466
WriteX       1902611     0.245   503.604
ReadX        5984135     0.008    16.154
LockX          12430     0.008     9.111
UnlockX        12430     0.004     4.551
Flush         267557     4.505   902.093

Throughput 199.664 MB/sec  5 clients  5 procs  max_latency=902.099 ms
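For completeness: the host-side part from the list at the top (the ZoL pool/filesystem and the hostA/hostB replica volume that the VM disk images live on) is not spelled out above. A minimal sketch, assuming a pool named tank on /dev/sdb and a volume named vm_images_volume (these names are examples, not taken from this mail):

# on hostA; mirror the mkdir and a peer probe on hostB as well,
# since the probe should be done from BOTH sides
zpool create tank /dev/sdb
zfs create tank/gluster                  # mounted at /tank/gluster by default
mkdir -p /tank/gluster/brick
gluster peer probe hostB
gluster volume create vm_images_volume replica 2 transport tcp hostA:/tank/gluster/brick hostB:/tank/gluster/brick
gluster volume start vm_images_volume
mkdir -p /srv/vm_images
mount -t glusterfs hostA:/vm_images_volume /srv/vm_images    # keep the VM disk images here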
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users