On 05/04/2011 03:14 AM, Aleksanyan, Aleksandr wrote:
> I test GlusterFS on this equipment: [...]
> Max Write: 1720.80 MiB/sec (1804.39 MB/sec)
> Max Read: 1415.64 MiB/sec (1484.40 MB/sec)

Hmmm ... that seems low. With 24 bricks we were getting ~10+ GB/s two years ago on the 2.0.x series of code. You might have a bottleneck somewhere in the Fibre Channel portion of things.

> Run finished: Tue Oct 19 09:30:34 2010
> Why *read* < *write*? Is it normal for GlusterFS?

It's generally normal for most cluster/distributed file systems that have any sort of write caching (RAID, brick OS write cache, etc.). The write can be absorbed into cache (across 16 units, only 10 GB of RAM per unit is needed to cache it) and committed to disk later, while reads of uncached data must come off the disks at their real speed.

When we do testing on our units, we recommend using data sizes that far exceed any conceivable cache. We regularly do single-machine TB-sized reads and writes (as well as cluster storage reads and writes in the 1-20 TB region) as part of our normal testing regimen. We recommend reporting the non-cached performance numbers, as that is what users will most often see (as the nominal case).

Regards,

Joe

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics, Inc.
email: landman at scalableinformatics.com
web  : http://scalableinformatics.com
       http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
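[Editor's note: to make the cache arithmetic in the reply concrete, here is a small sketch. The node count and per-node RAM figure come from the "16 units / 10 GB per unit" example above; the 10x safety factor for choosing a test size is an illustrative rule of thumb, not a figure from the original message.]

```python
# Rough sizing check: how much written data a cluster can absorb into RAM
# write cache, and how large a benchmark data set should be to defeat it.

nodes = 16            # units in the cluster (from the example in the reply)
ram_per_node_gb = 10  # RAM available for write caching per unit (from the reply)

# Total write volume the cluster can absorb into cache before any of it
# has to touch disk.
total_cache_gb = nodes * ram_per_node_gb

# Rule of thumb (assumption, not from the original message): benchmark with
# data an order of magnitude larger than any conceivable cache, so the
# measured numbers reflect disk, not RAM.
min_test_size_gb = 10 * total_cache_gb

print(total_cache_gb)    # 160
print(min_test_size_gb)  # 1600
```

With these figures, a benchmark writing less than ~160 GB in total could complete almost entirely in RAM, which is one way write throughput can appear higher than read throughput.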