On Sat, Aug 17, 2013 at 5:20 AM, Jeff Darcy <jdarcy at redhat.com> wrote:
> On 08/16/2013 11:21 PM, Alexey Shalin wrote:
>
>> I wrote a small script:
>>
>> #!/bin/bash
>>
>> for i in {1..1000}; do
>>     size=$((RANDOM%5+1))
>>     dd if=/dev/zero of=/storage/test/bigfile${i} count=1024 bs=${size}k
>> done
>>
>> This script creates files of different sizes on the volume.
>>
>> Here is the output:
>>
>> 2097152 bytes (2.1 MB) copied, 0.120632 s, 17.4 MB/s
>> 1024+0 records in
>> 1024+0 records out
>> 1048576 bytes (1.0 MB) copied, 0.14548 s, 7.2 MB/s
>> 1024+0 records in
>> 1024+0 records out
>
> It looks like you're doing small writes (1-5KB) from a single thread.
> That means network latency is going to be your primary limiting factor:
> 20MB/s at 4KB is 5000 IOPS, or 0.2ms per network round trip. You don't say
> what kind of network you're using, but if it's Plain Old GigE that doesn't
> seem too surprising. BTW, the NFS numbers are likely to be better because
> the NFS client does more caching and you're not writing enough to fill
> memory, so you're actually getting less durability than in the
> native-protocol test; the numbers therefore aren't directly comparable.
>
> I suggest trying larger block sizes and higher I/O thread counts (with
> iozone you can do this in a single command instead of a script). You
> should see a pretty marked improvement.

Also, small-block writes kill performance on FUSE because of the context
switches (and the lack of write caching in FUSE). Larger block sizes
(>= 64KB) should start showing good performance.

Avati
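
For reference, a multi-threaded, large-block run along the lines Jeff
suggests might look roughly like this (the record size, file size, thread
count, and file names under /storage/test are illustrative, and assume
iozone is installed on the client):

    # 4 writer threads, 64KB records, 1GB per file, against the FUSE mount
    iozone -i 0 -t 4 -r 64k -s 1g -F /storage/test/f1 /storage/test/f2 \
        /storage/test/f3 /storage/test/f4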
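
And to see the FUSE block-size effect Avati describes with the same
dd-based approach, one could rewrite a single test file with a much larger
block size (the file name here is again just an example):

    # a 4MB file written as 4 x 1MB requests instead of 1024 x 4KB requests
    dd if=/dev/zero of=/storage/test/bigfile_largebs count=4 bs=1M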