Resending in case anyone has ideas..
On Tue, Apr 28, 2015 at 3:32 PM, Raghuram BK <ram@xxxxxxxxxxxxx> wrote:
I've been trying to figure out the performance overhead of gluster over the underlying filesystem, and I find the difference to be quite stark. Is this normal? Can something be done about it? I've tried to eliminate the network overhead by doing everything locally, and to eliminate the effect of caching by forcing all reads to hit the hard drives. Here's what I did:

1. Force the underlying filesystem (ZFS) to always read from disk
2. Create the underlying storage (zfs create frzpool/normal/d1)
3. Create a gluster distributed volume with only one brick on the local machine (gluster volume create g1 fractalio-pri.fractalio.lan:/frzpool/normal/d1 force)
4. Start it (gluster volume start g1)
5. Check the volume info:

[root@fractalio-pri fractalio]# gluster v info g1
Volume Name: g1
Type: Distribute
Volume ID: e50f13d2-cb98-47f4-8113-3f15b4b6306a
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: fractalio-pri.fractalio.lan:/frzpool/normal/d1

6. Mount it (mount -t glusterfs localhost:/g1 /mnt/g1)
7. Populate a test file into the volume:

[root@fractalio-pri fractalio]# dd if=/dev/zero of=/mnt/g1/ddfile1 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 8.4938 s, 247 MB/s

8. Read the file from the gluster mount:

[root@fractalio-pri fractalio]# dd if=/mnt/g1/ddfile1 of=/dev/zero bs=1M
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 84.4174 s, 24.8 MB/s

9. Read the file directly from the underlying storage:

[root@fractalio-pri fractalio]# dd if=/frzpool/normal/d1/ddfile1 of=/dev/zero bs=1M
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 24.722 s, 84.8 MB/s

The throughput comes down from 84.8 MB/s to 24.8 MB/s, a 240% overhead?!
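For what it's worth, the ~240% figure does check out from the dd timings above. A quick sanity check (plain Python, nothing gluster-specific; all numbers are taken from the transcript):

```python
# Numbers taken from the dd runs above.
size_bytes = 2097152000

gluster_read_s = 84.4174  # read via the glusterfs mount
direct_read_s = 24.722    # read straight from the ZFS brick

# dd reports decimal MB/s (10^6 bytes per second), matching 24.8 and 84.8.
gluster_mbps = size_bytes / 1e6 / gluster_read_s
direct_mbps = size_bytes / 1e6 / direct_read_s
print(f"gluster: {gluster_mbps:.1f} MB/s, direct: {direct_mbps:.1f} MB/s")

# Extra read time relative to the direct read.
overhead_pct = (gluster_read_s - direct_read_s) / direct_read_s * 100
print(f"overhead: {overhead_pct:.0f}%")  # ~241%
```

So gluster takes about 3.4x as long as the raw brick for the same sequential read, i.e. roughly 240% extra time.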
--
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel