small write speed problem on EBS, distributed replica

Hi Vikas, Mohit,
  I should disclose our typical use case:
we need to read and write files several hundred MB in size, with a
read:write ratio of about 1:1.

> What did you use to calculate latency?

I used lmbench (http://www.bitmover.com/lmbench); it includes a tool
called "lat_tcp".
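
(In case it helps to reproduce the measurement: lat_tcp runs as a
server/client pair - the hostname below is just one of our nodes.)

# on the server node, e.g. dfs01:
lat_tcp -s
# on the client, measure TCP round-trip latency to that node:
lat_tcp dfs01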

The numbers below are from the lmbench tool "bw_tcp":

> Network bandwidths:
> dfs01: 54 MB/s
> dfs02: 62.5 MB/s
> dfs03: 64 MB/s
> dfs04: 91.5 MB/s
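
(bw_tcp was run the same way - roughly, with our hostnames as examples:)

# on the server node:
bw_tcp -s
# on the client, measure TCP bandwidth to that node:
bw_tcp dfs01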

The setup is Gluster native, no NFS.

About the "Optimizing Gluster" link - I have seen it before, but there
are several things I don't understand:

1.) Tuning FUSE to use a larger block size - when testing PVFS, we
achieved the best performance with bs = 4MB, so it's hard to understand
why FUSE is hardcoded to 128 KB.
I have also read elsewhere (in reference to FUSE) that a larger block
size doesn't yield more performance.
My guess is that when transferring a larger amount of data over a
network with significant latency, far fewer IO requests should result
in higher throughput. (It's also cheaper on EBS, which charges per
request.)

Are those listed adjustments to the FUSE kernel module still applicable?
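
A simple way to check whether a larger request size actually helps here
would be to compare runs like the two below on the Gluster mount (mount
point and file name are just examples, and oflag=direct assumes the
mount accepts O_DIRECT):

# 32 MB written as 256 requests of 128 KB
dd if=/dev/zero of=/mnt/glusterfs/testfile bs=128k count=256 oflag=direct
# the same 32 MB written as 8 requests of 4 MB
dd if=/dev/zero of=/mnt/glusterfs/testfile bs=4M count=8 oflag=direct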

2.) Enabling direct-io mode
Does the following still work on the current 3.1.2?

glusterfs --direct-io-mode=write-only -f <spec-file> <mount-point>

And does it also work with --direct-io-mode=read-write?
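
(If the spec-file invocation is gone in 3.1.x, I assume the equivalent
would be the fuse mount option - server and volume name below are
placeholders:)

mount -t glusterfs -o direct-io-mode=enable dfs01:/myvolume /mnt/glusterfs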

Of the parameters in "Setting Volume Options", could this one help:
- performance.write-behind-window-size - increased 10-20 times?
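
(I assume that would be set with the volume set command - volume name
and value are just examples, relative to what I believe is a 1MB
default:)

gluster volume set myvolume performance.write-behind-window-size 16MB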

Now, the raw block device throughput (dd if=/dev/zero
of=/path/to/ebs/mount bs=128k count=4096 oflag=direct),
3 measurements each on the server machines dfs0[1-4]:

dfs01: 9.0 MB/s, 16.4 MB/s, 18.4 MB/s
dfs02: 26.0 MB/s, 28.5 MB/s, 13.0 MB/s
dfs03: 14.4 MB/s, 11.8 MB/s, 32.6 MB/s
dfs04: 35.5 MB/s, 33.1 MB/s, 31.9 MB/s

This, indeed, varies considerably!
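
(The three samples per host are just repeated runs of the same dd
command, e.g.:)

for i in 1 2 3; do
  # dd prints its throughput summary on stderr; keep only that line
  dd if=/dev/zero of=/path/to/ebs/mount bs=128k count=4096 oflag=direct 2>&1 | tail -1
done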

Thanks for the help.
Karol


On Wed, Mar 23, 2011 at 7:06 PM, Vikas Gorur <vikas at gluster.com> wrote:
> Karol,
>
> A few general pointers about EBS performance:
>
> We've seen throughput to an EBS volume vary considerably. Since EBS is iSCSI underneath, throughput to a volume can fluctuate, and it is also possible that your instance is on degraded hardware that gets very low throughput to the volume.
>
> So I would advise you to first gather some data about all your EBS volumes. You can measure throughput to them by doing something like:
>
> dd if=/dev/zero of=/path/to/ebs/mount bs=128k count=4096 oflag=direct
>
> The "oflag=direct" will give us the raw block device throughput, without the kernel cache in the way.
>
> The performance you see on the Gluster mountpoint will be a function of the EBS performance. You might also want to spin up a couple more instances and see their EBS throughput to get an idea of the range of EBS performance.
>
> Doing a RAID0 of 4 or 8 EBS volumes using mdadm will also help you increase performance.
>
> ------------------------------
> Vikas Gorur
> Engineer - Gluster, Inc.
> ------------------------------

