Are there any entries in your NFS log files and native client log files?

Avati

On Thu, May 31, 2012 at 6:49 AM, Roman Alekseev <rs.alekseev at gmail.com> wrote:

> GlusterFS shows the worst result:
>
> Gluster NFS:
> server96:/server18/ortoeuromag.ru on /glusterfs type nfs
> (rw,mountproto=tcp,vers=3,addr=192.168.65.161)
>
> time find . | wc -l
> 83949
>
> real    75m5.243s
> user    0m0.806s
> sys     0m5.430s
>
> Native client:
> server96:/server18 on /mnt type fuse.glusterfs (rw,allow_other,max_read=131072)
>
> time find . | wc -l
> 83949
>
> real    47m3.149s
> user    0m0.904s
> sys     0m4.320s
>
> NFS without gluster:
>
> time find . | wc -l
> 83931
>
> real    0m13.420s
> user    0m0.332s
> sys     0m3.232s
>
> vol info:
>
> Volume Name: server18
> Type: Distributed-Replicate
> Status: Started
> Number of Bricks: 15 x 3 = 45
> Transport-type: tcp
> Bricks:
> Brick1: server96:/mnt/sdd5
> Brick2: server29:/mnt/sdb5
> Brick3: server89:/mnt/sdd5
> Brick4: server96:/mnt/sdd6
> Brick5: server29:/mnt/sdb6
> Brick6: server89:/mnt/sdd6
> Brick7: server66:/mnt/sdd7
> Brick8: server29:/mnt/sdb7
> Brick9: server89:/mnt/sdd7
> Brick10: server66:/mnt/sdd8
> Brick11: server29:/mnt/sdb8
> Brick12: server89:/mnt/sdd8
> Brick13: server96:/mnt/sda5
> Brick14: server29:/mnt/sdd5
> Brick15: server89:/mnt/sda5
> Brick16: server96:/mnt/sda6
> Brick17: server29:/mnt/sdd6
> Brick18: server89:/mnt/sda6
> Brick19: server29:/mnt/sdd7
> Brick20: server89:/mnt/sda7
> Brick21: server66:/mnt/sdb7
> Brick22: server29:/mnt/sdd8
> Brick23: server89:/mnt/sda8
> Brick24: server66:/mnt/sdb8
> Brick25: server29:/mnt/sdc8
> Brick26: server89:/mnt/sdc8
> Brick27: server66:/mnt/sdc8
> Brick28: server96:/mnt/sdc5
> Brick29: server29:/mnt/sdc5
> Brick30: server89:/mnt/sdc5
> Brick31: server96:/mnt/sdc6
> Brick32: server29:/mnt/sdc6
> Brick33: server89:/mnt/sdc6
> Brick34: server66:/mnt/sdc7
> Brick35: server89:/mnt/sdc7
> Brick36: server29:/mnt/sdc7
> Brick37: server29:/mnt/sda5
> Brick38: server89:/mnt/sdb5
> Brick39: server66:/mnt/sdd5
> Brick40: server29:/mnt/sda6
> Brick41: server89:/mnt/sdb6
> Brick42: server66:/mnt/sdd6
> Brick43: server29:/mnt/sda7
> Brick44: server89:/mnt/sdb7
> Brick45: server66:/mnt/sdb5
> Options Reconfigured:
> performance.io-thread-count: 32
> features.quota-timeout: 600
> performance.cache-size: 256MB
> features.quota: off
> auth.allow: *
>
> I have 4 servers (server66, server89, server29, server96) and 2 dedicated
> servers (clients) which are connected to the glusterfs storage.
> Also, could you please explain your advice in more detail?
> How should I tune my storage? Does anyone know how to configure this
> distributed file system correctly?
>
> --
> Kind regards,
>
> R. Alekseev
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
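
For reference, a minimal sketch of where those logs usually live on a default
GlusterFS install; the exact file names below are an assumption and may differ
on your distribution:

    # Gluster NFS server log, on the server that exports the volume over NFS
    less /var/log/glusterfs/nfs.log

    # Native (FUSE) client log, on the client; the file is usually named after
    # the mount point, e.g. /mnt -> mnt.log (assumed naming, check the directory)
    less /var/log/glusterfs/mnt.log

    # Per-brick logs on each of the four servers
    ls /var/log/glusterfs/bricks/

Grepping these for error/warning entries around the time of the slow find run
would help narrow down whether a particular brick or server is misbehaving.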