Hi Paul,

You make a good point there.

Hi Roland,

Generally we have observed that it is good to have the same number of
gluster threads as kernel threads (or the number of cores, if not
hyper-threading). You may be bottlenecking not just on CPU but also on
disk. Did you check the iowaits?

One good option, since you have a powerful CPU, is host/software RAID
(unless you already have hardware RAID). Use LVM and stripe across all
or part of the disks (with RAID5/RAID6 if you like). A 64k stripe size
seems to work well (it is not the best for all applications, so you
will have to experiment to find the best performance for yours).

Regards,
Tejas.

----- Original Message -----
From: "pkoelle" <pkoelle at gmail.com>
To: gluster-users at gluster.org
Sent: Monday, July 19, 2010 9:57:25 PM
Subject: Re: Performance degrade

On 19.07.2010 17:10, Roland Rabben wrote:
> I did try that on one of the clients. I removed all performance
> translators except io-threads. No improvement.
> The server still uses a huge amount of CPU.

36*8 = 288 threads alone for IO. I don't know the specifics of
GlusterFS, but common knowledge suggests high thread counts are bad:
you end up burning all your CPU waiting on locks and in context
switches.

Why do you export each disk separately? You don't seem to care about
disk failure, so you could put all the disks in one LVM VG and export
LVs from that.

cheers
Paul

>
> Roland
>
> 2010/7/19 Andre Felipe Machado <andremachado at techforce.com.br>:
>> Hello,
>> Did you try to minimize or even NOT use any cache?
>> With so many nodes, cache coherency between them may have become an
>> issue...
>> Regards.
>> Andre Felipe Machado
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
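For reference, the setup Tejas and Paul describe might look roughly
like the sketch below. All device names, sizes, and mount points are
placeholders (the thread does not name any), so adjust them for your
own hardware; the RAID5/RAID6 option Tejas mentions would be layered
in via mdadm or LVM RAID and is omitted here for brevity.

```shell
# Sketch only -- /dev/sdb..sde, sizes, and names are illustrative, not
# taken from the thread.

# Paul's suggestion: pool all the data disks into one volume group
# instead of exporting each disk separately.
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate gluster_vg /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Tejas's suggestion: carve out a logical volume striped across all
# four PVs with a 64k stripe size.
lvcreate --stripes 4 --stripesize 64k --size 500G \
         --name brick1 gluster_vg

# Make a filesystem and mount it as the export/brick directory.
mkfs -t ext3 /dev/gluster_vg/brick1
mount /dev/gluster_vg/brick1 /export/brick1

# Before (and after) reworking the disk layout, check whether you are
# actually I/O bound: a consistently high %iowait column points at the
# disks rather than at GlusterFS thread counts.
iostat -x 5
```

Note that a plain striped LV like this has no redundancy; a single
disk failure loses the whole volume, which is why Tejas suggests
adding RAID5/RAID6 underneath if you care about disk failure.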