horrible write performance after upgrade from 3.2 to 3.3

I have a brick on an MD RAID5 array formatted with ext4, on a gigabit network.

My brick is located at /srv/media.
Writing directly to the brick with dd if=/dev/zero of=/srv/media/test.zero reports 150MB/sec; 
reading the same file back with dd reports 300MB/sec.
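
For reference, those local (non-Gluster) numbers came from dd runs along these 
lines; the block size and count here are illustrative, not the exact values:

  # write test directly against the brick, flushing to disk at the end
  dd if=/dev/zero of=/srv/media/test.zero bs=1M count=4096 conv=fdatasync
  # read the same file back
  dd if=/srv/media/test.zero of=/dev/null bs=1M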

I have used iperf to verify that it is not a network adapter issue; I get 
1Gbit/sec in each direction.
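
The iperf test was along these lines (iperf 2 syntax; -r runs the transfer in 
both directions, one after the other):

  # on the server (10.0.0.1)
  iperf -s
  # on the client
  iperf -c 10.0.0.1 -r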

On GlusterFS 3.2 my read and write performance was as expected: roughly 
100MB/sec each way.

On GlusterFS 3.3, my read speed is still 100MB/sec, but my write speed 
never exceeds 10MB/sec. It seems as though something is throttling my 
writes as if I were on a 100Mbit network (100Mbit/sec works out to roughly 
12.5MB/sec).
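
The GlusterFS figures above were measured over the GlusterFS mount (the native 
FUSE client, since NFS is disabled) with the same kind of dd runs; the mount 
point here is just an example:

  mount -t glusterfs 10.0.0.1:/media /mnt/media
  dd if=/dev/zero of=/mnt/media/test.zero bs=1M count=4096 conv=fdatasync
  dd if=/mnt/media/test.zero of=/dev/null bs=1M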

CPU usage on both the server and the client is around 25% during the transfer, 
and no other processes are eating I/O.
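
I checked for competing disk activity on the brick server with something like 
this (the interval is arbitrary):

  # per-device utilization and wait times, refreshed every second
  iostat -x 1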

Any ideas on why write performance is suffering?

gluster> volume info

Volume Name: media
Type: Distribute
Volume ID: 990a5d58-f76c-405c-a7bf-096e70b9fed3
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.0.0.1:/srv/media
Options Reconfigured:
auth.allow: 10.0.0.*
nfs.disable: On
performance.cache-size: 128MB
performance.write-behind-window-size: 128MB
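
For what it's worth, the two options shown under "Options Reconfigured" were 
applied with the usual volume set commands, roughly:

  gluster volume set media performance.cache-size 128MB
  gluster volume set media performance.write-behind-window-size 128MB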

Thanks,
Michael

