On Fri, 2010-03-26 at 17:44 +0000, Ian Rogers wrote:
> Hi Ramiro
>
> ideas off the top of my head:
>
> Get rid of performance/quick-read - it has a memory leak bug due to be
> fixed in gluster v3.0.5

Thanks for the tip ;-)

> If the files are going to be accessed by a program (which doesn't list
> the directories often) rather than a user (who might) then you can get
> rid of performance/stat-prefetch too.

Oh! Perfect.

> In the long term - will this cluster be mostly used for serving files
> (ie. read-mostly) or will you be creating files as often as reading
> them?

We will be creating files about as often as we read them.

> If mostly-read then get rid of performance/write-behind. Also the
> calculated option cache-size only uses 20% of your ram for the io-cache.
> Hard code it to a value you know works best. See also my write-up
> http://www.sirgroane.net/2010/03/tuning-glusterfs-for-apache-on-ec2

Perhaps.

> As you have physical disks then I'm guessing performance/read-ahead
> should be good for you.
>
> Does the genfiles.sh script allow you to simulate multiple processes -
> if not then you're not seeing the full benefit of your 6 back-end stores...

You could run the genfiles.sh script several times simultaneously (my
English is really poor; we could change the subject of this mail to
something like "poor performance and poor English" xDDD), but it is not a
threaded application (iozone rulez). If I run 3 genfiles.sh processes I
get 440, 441, and 450 files (about 1330 files in total), but adding more
processes doesn't push the total much higher :)

With 6 genfiles.sh processes running at the same time I get:

PID 12832 : 249 files created in 60 seconds.
PID 12830 : 249 files created in 60 seconds.
PID 12829 : 248 files created in 60 seconds.
PID 12827 : 262 files created in 60 seconds.
PID 12828 : 252 files created in 60 seconds.
PID 12831 : 255 files created in 60 seconds.

1515 files in total.

Thanks for your ideas ;)

--
Ramiro Magallanes <listas at sabueso.org>
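
[For anyone following the tuning advice above, a minimal GlusterFS
3.0-style client volfile sketch of what Ian is suggesting might look like
the following. The host names, volume names and the 512MB figure are
placeholders, not values from this thread; quick-read, stat-prefetch and
write-behind are simply left out of the stack, and io-cache's cache-size
is hard-coded instead of relying on the 20%-of-RAM calculation.]

    # Two of the six back-end stores, as protocol/client volumes.
    # remote-host and remote-subvolume are placeholders.
    volume remote1
      type protocol/client
      option transport-type tcp
      option remote-host server1
      option remote-subvolume brick
    end-volume

    volume remote2
      type protocol/client
      option transport-type tcp
      option remote-host server2
      option remote-subvolume brick
    end-volume

    # Distribute across the back-end stores (add remote3..remote6 here).
    volume dist
      type cluster/distribute
      subvolumes remote1 remote2
    end-volume

    # read-ahead kept, as suggested for physical disks.
    volume readahead
      type performance/read-ahead
      subvolumes dist
    end-volume

    # io-cache with a hard-coded cache-size; 512MB is only an example
    # value, tune it to what works best on your hardware.
    volume iocache
      type performance/io-cache
      option cache-size 512MB
      subvolumes readahead
    end-volume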
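
[And in case it helps anyone reproduce the 6-process numbers above, here
is a rough shell sketch for launching several copies of genfiles.sh in
parallel and summing the per-process totals. It assumes each run prints a
line like "249 files created in 60 seconds." with the count as the first
field on the line; adjust the awk pattern if your genfiles.sh output
differs.]

    #!/bin/sh
    # Run N copies of genfiles.sh in parallel and add up the results.
    N=6
    for i in $(seq 1 "$N"); do
        ./genfiles.sh > "genfiles.$i.log" 2>&1 &
    done
    wait   # block until all N background runs have finished

    # Sum the first field of every "files created" line across the logs.
    awk '/files created/ { total += $1 }
         END { print total " files in total" }' genfiles.*.log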