Hello!

I have a GlusterFS installation with the following parameters:

- 4 servers, connected by a 1 Gbit/s network (760-800 Mbit/s measured with iperf)
- A distributed-replicated volume with 4 bricks (2 x 2 = 4)
- A replicated volume with 2 bricks (1 x 2 = 2)

I have run into a problem: copying a large number of files (94,000 files, ~3 GB total) takes a very long time (20 to 40 minutes). I ran some tests, with the following results:

- Directly to storage (single 2 TB HDD): 158 MB/s
- Directly to storage (RAID1 of 2 HDDs): 190 MB/s
- To the replicated Gluster volume: 89 MB/s
- To the distributed-replicated Gluster volume: 49 MB/s

The test command is:

sync && echo 3 > /proc/sys/vm/drop_caches && dd if=/dev/zero of=gluster.test.bin bs=1G count=1

Switching direct-io on and off has no effect, and neither does tuning the GlusterFS options.

What can I do to improve performance?

My volumes:

Volume Name: nginx
Type: Replicate
Volume ID: e3306431-e01d-41f8-8b2d-86a61837b0b2
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: control1:/storage/nginx
Brick2: control2:/storage/nginx

Volume Name: instances
Type: Distributed-Replicate
Volume ID: d32363fc-4b53-433c-87b7-ad51acfa4125
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: control1:/storage/instances
Brick2: control2:/storage/instances
Brick3: compute1:/storage/instances
Brick4: compute2:/storage/instances
Options Reconfigured:
cluster.self-heal-window-size: 1
cluster.data-self-heal-algorithm: diff
performance.stat-prefetch: 1
features.quota-timeout: 3600
performance.write-behind-window-size: 512MB
performance.cache-size: 1GB
performance.io-thread-count: 64
performance.flush-behind: on
performance.cache-min-file-size: 0
performance.write-behind: on

The volumes are mounted with default options via the GlusterFS FUSE client.

--
With best regards,
differentlocal (www.differentlocal.ru | differentlocal at gmail.com),
System administrator.
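
P.S. Since the real workload is copying ~94,000 small files rather than writing one large file, a small-file test might be more representative than the 1 GB dd above. A rough sketch of what such a test could look like (the mount point /mnt/instances and the file count/size are only examples, not my actual paths):

# Drop caches, then create 10,000 files of 32 KB each on the FUSE mount and time it.
sync && echo 3 > /proc/sys/vm/drop_caches
time for i in $(seq 1 10000); do
    dd if=/dev/zero of=/mnt/instances/smallfile.$i bs=32k count=1 2>/dev/null
done

A test like this is dominated by per-file create/lookup latency across the replicas rather than raw streaming throughput, which is closer to what the 94,000-file copy actually does.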