Hi Vijay,
Have you had time to look into this issue yet?
Cw
On Tue, May 3, 2016 at 5:55 PM, qingwei wei <tchengwee@xxxxxxxxx> wrote:
Hi Vijay,

I finally managed to do this test on the sharded volume.

gluster volume info

Volume Name: abctest
Type: Distributed-Replicate
Volume ID: 0db494e2-51a3-4521-a1ba-5d3479cecba2
Status: Started
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: abc11:/data/hdd1/abctest
Brick2: abc12:/data/hdd1/abctest
Brick3: abc14:/data/hdd1/abctest
Brick4: abc16:/data/hdd1/abctest
Brick5: abc17:/data/hdd1/abctest
Brick6: abc20:/data/hdd1/abctest
Brick7: abc22:/data/hdd1/abctest
Brick8: abc23:/data/hdd1/abctest
Brick9: abc24:/data/hdd1/abctest
Options Reconfigured:
features.shard-block-size: 16MB
features.shard: on
server.allow-insecure: on
storage.owner-uid: 165
storage.owner-gid: 165
nfs.disable: true
performance.quick-read: off
performance.io-cache: off
performance.read-ahead: off
performance.stat-prefetch: off
cluster.lookup-optimize: on
cluster.quorum-type: auto
cluster.server-quorum-type: server
transport.address-family: inet
performance.readdir-ahead: off
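For reference, a volume like the above can be reproduced with the standard
gluster CLI; this is a sketch reconstructed from the volume info, not
necessarily the exact commands that were run:

    # Create the 3 x 3 distributed-replicated volume from the brick list above
    gluster volume create abctest replica 3 \
        abc11:/data/hdd1/abctest abc12:/data/hdd1/abctest abc14:/data/hdd1/abctest \
        abc16:/data/hdd1/abctest abc17:/data/hdd1/abctest abc20:/data/hdd1/abctest \
        abc22:/data/hdd1/abctest abc23:/data/hdd1/abctest abc24:/data/hdd1/abctest
    # Enable sharding and set the shard size listed under "Options Reconfigured"
    gluster volume set abctest features.shard on
    gluster volume set abctest features.shard-block-size 16MB
    gluster volume start abctest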
Result is still the same.

4k random write:
  IOPS: 5355.75
  Avg. response time (ms): 2.79
  CPU utilization total (%): 96.73
  CPU privileged time (%): 92.49

4k random read:
  IOPS: 16718.93
  Avg. response time (ms): 0.9
  CPU utilization total (%): 79.2
  CPU privileged time (%): 75.43
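For anyone reproducing the workload, a fio job along these lines generates the
same 4k random pattern inside the VM (illustrative only; the tool and flags
actually used for the numbers above may differ, and the filename is a
placeholder):

    # 4k random write, direct I/O, async engine; swap --rw=randread for the read test
    fio --name=4krandwrite --filename=/mnt/test/fio.dat --size=4g \
        --rw=randwrite --bs=4k --direct=1 --ioengine=libaio --iodepth=16 \
        --runtime=60 --time_based --group_reporting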
The snapshot of top -H while running 4k random write:

  PID   USER  PR  NI  VIRT     RES     SHR    S  %CPU  %MEM  TIME+    COMMAND
23850   qemu  20   0  10.294g  8.203g  11788  R  92.7   6.5  5:42.01  qemu-kvm
24116   qemu  20   0  10.294g  8.203g  11788  S  34.2   6.5  1:27.91  qemu-kvm
26948   qemu  20   0  10.294g  8.203g  11788  R  33.6   6.5  1:26.85  qemu-kvm
24115   qemu  20   0  10.294g  8.203g  11788  S  32.9   6.5  1:27.72  qemu-kvm
26937   qemu  20   0  10.294g  8.203g  11788  S  32.9   6.5  1:27.87  qemu-kvm
27050   qemu  20   0  10.294g  8.203g  11788  R  32.9   6.5  1:17.14  qemu-kvm
27033   qemu  20   0  10.294g  8.203g  11788  S  31.6   6.5  1:19.40  qemu-kvm
24119   qemu  20   0  10.294g  8.203g  11788  S  26.6   6.5  1:32.16  qemu-kvm
24120   qemu  20   0  10.294g  8.203g  11788  S  25.9   6.5  1:32.02  qemu-kvm
23880   qemu  20   0  10.294g  8.203g  11788  S   8.3   6.5  2:31.11  qemu-kvm
23881   qemu  20   0  10.294g  8.203g  11788  S   8.0   6.5  2:58.75  qemu-kvm
23878   qemu  20   0  10.294g  8.203g  11788  S   7.6   6.5  2:04.15  qemu-kvm
23879   qemu  20   0  10.294g  8.203g  11788  S   7.6   6.5  2:36.50  qemu-kvm
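The per-thread view above comes from top's thread mode; something along these
lines captures it for the qemu-kvm process (command is illustrative):

    top -H -p "$(pgrep -d, qemu-kvm)"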
Thanks,
Cw

On Thu, Apr 21, 2016 at 10:12 PM, Vijay Bellur <vbellur@xxxxxxxxxx> wrote:

On Wed, Apr 20, 2016 at 4:17 AM, qingwei wei <tchengwee@xxxxxxxxx> wrote:

> Gluster volume configuration (the bold entries are the initial settings I
> have):
>
> Volume Name: g37test
> Type: Stripe
> Volume ID: 3f9dae3d-08f9-4321-aeac-67f44c7eb1ac
> Status: Created
> Number of Bricks: 1 x 10 = 10
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.123.4:/mnt/sdb_mssd/data
> Brick2: 192.168.123.4:/mnt/sdc_mssd/data
> Brick3: 192.168.123.4:/mnt/sdd_mssd/data
> Brick4: 192.168.123.4:/mnt/sde_mssd/data
> Brick5: 192.168.123.4:/mnt/sdf_mssd/data
> Brick6: 192.168.123.4:/mnt/sdg_mssd/data
> Brick7: 192.168.123.4:/mnt/sdh_mssd/data
> Brick8: 192.168.123.4:/mnt/sdj_mssd/data
> Brick9: 192.168.123.4:/mnt/sdm_mssd/data
> Brick10: 192.168.123.4:/mnt/sdn_mssd/data
> Options Reconfigured:
> server.allow-insecure: on
> storage.owner-uid: 165
> storage.owner-gid: 165
> performance.quick-read: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> nfs.disable: true
>
I notice that you are using a stripe volume. Would it be possible to
test with a sharded volume? We will be focusing only on sharded
volumes for VM disks going forward.
Thanks,
Vijay