On Wednesday 04 May 2011 01:38 PM, Aleksanyan, Aleksandr wrote:
> GlusterFS configuration topology, please look in the attached file. Thanks.

Nice description of the setup!

One immediate thing that comes to mind is that you might be seeing the
effect of write caching on the backend disk filesystem. Can you tell me
what disk filesystem you are using? And I presume you are running Linux
on the OSS servers?

Pavan

>
> Server conf:
> 2x Intel Xeon 5570, 12 GB RAM
>
> Client conf:
> 2x Intel Xeon 5670, 12 GB RAM
>
> gluster volume info:
>
> Volume Name: gluster
> Type: Distribute
> Status: Started
> Number of Bricks: 16
> Transport-type: rdma
> Bricks:
> Brick1: oss1:/mnt/ost1
> Brick2: oss1:/mnt/ost2
> Brick3: oss1:/mnt/ost3
> Brick4: oss1:/mnt/ost4
> Brick5: oss2:/mnt/ost1
> Brick6: oss2:/mnt/ost2
> Brick7: oss2:/mnt/ost3
> Brick8: oss2:/mnt/ost4
> Brick9: oss3:/mnt/ost1
> Brick10: oss3:/mnt/ost2
> Brick11: oss3:/mnt/ost3
> Brick12: oss3:/mnt/ost4
> Brick13: oss4:/mnt/ost1
> Brick14: oss4:/mnt/ost2
> Brick15: oss4:/mnt/ost3
> Brick16: oss4:/mnt/ost4
>
> Best regards,
> Aleksandr Aleksanyan
> Tel: +7(495)744-0980 (1434)
> ________________________________________
> From: Pavan [tcp at gluster.com]
> Sent: 4 May 2011 11:44
> To: Aleksanyan, Aleksandr
> Cc: gluster-users at gluster.org
> Subject: Re: GlusterFS Benchmarks
>
> On Wednesday 04 May 2011 12:44 PM, Aleksanyan, Aleksandr wrote:
>> I am testing GlusterFS on this equipment:
>>
>> Backend: LSI 7000, 80 TB, 24 LUNs
>> 4 OSS: Intel-based servers, connected to the LSI via 8 Gb Fibre Channel, 12 GB RAM
>
> Can you please clarify what OSS here means?
>
> And, please mention what your GlusterFS configuration looks like.
>
> Pavan
>
>> 1 Intel-based main server, connected to the OSS nodes via QDR InfiniBand, 12 GB RAM,
>> and 16 load generators with 2x Xeon X5670 on board, 12 GB RAM, QDR InfiniBand.
>> I use IOR for the test and get the following results:
>>
>> /install/mpi/bin/mpirun --hostfile /gluster/C/nodes_1p -np 16 /gluster/C/IOR -F -k -b10G -t1m
>>
>> IOR-2.10.3: MPI Coordinated Test of Parallel I/O
>> Run began: Tue Oct 19 09:27:03 2010
>> Command line used: /gluster/C/IOR -F -k -b10G -t1m
>> Machine: Linux node1
>> Summary:
>>   api                = POSIX
>>   test filename      = testFile
>>   access             = file-per-process
>>   ordering in a file = sequential offsets
>>   ordering inter file= no tasks offsets
>>   clients            = 16 (1 per node)
>>   repetitions        = 1
>>   xfersize           = 1 MiB
>>   blocksize          = 10 GiB
>>   aggregate filesize = 160 GiB
>>
>> Operation  Max (MiB)  Min (MiB)  Mean (MiB)  Std Dev  Max (OPs)  Min (OPs)  Mean (OPs)  Std Dev  Mean (s)
>> ---------  ---------  ---------  ----------  -------  ---------  ---------  ----------  -------  ---------
>> write      1720.80    1720.80    1720.80     0.00     1720.80    1720.80    1720.80     0.00     95.21174
>> read       1415.64    1415.64    1415.64     0.00     1415.64    1415.64    1415.64     0.00     115.73604
>>
>> Max Write: 1720.80 MiB/sec (1804.39 MB/sec)
>> Max Read:  1415.64 MiB/sec (1484.40 MB/sec)
>> Run finished: Tue Oct 19 09:30:34 2010
>>
>> Why is *read* slower than *write*? Is this normal for GlusterFS?
>>
>> Best regards,
>> Aleksandr
>> Tel: +7(495)744-0980 (1434)
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
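
A note on verifying the write-caching theory above: one way to check it is to force data to disk during the write phase and to drop page caches before the read phase, so that neither number is inflated by caching. Below is a minimal sketch, assuming IOR 2.10.x and Linux on the OSS nodes, and reusing the same mpirun/IOR paths as in the run above; the /dev/sdb device name is only a placeholder for one of the backend LUNs.

    # On each OSS node: check whether the disk/LUN write cache is enabled.
    # (hdparm applies to SATA/SAS disks; for FC LUNs behind the LSI array,
    # the array controller's own cache settings are what actually matter.)
    hdparm -W /dev/sdb

    # Write phase only (-w), with an fsync at file close (-e), so the result
    # reflects data reaching the bricks rather than sitting in a cache:
    /install/mpi/bin/mpirun --hostfile /gluster/C/nodes_1p -np 16 \
        /gluster/C/IOR -F -k -w -e -b10G -t1m

    # Drop page caches on every server and client node, then time the
    # read phase alone against the files kept by -k:
    echo 3 > /proc/sys/vm/drop_caches
    /install/mpi/bin/mpirun --hostfile /gluster/C/nodes_1p -np 16 \
        /gluster/C/IOR -F -k -r -b10G -t1m

If the write figure drops noticeably with -e while the read figure stays put, the gap in the original run was most likely the backend write cache rather than anything GlusterFS-specific.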
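
For reference, a 16-brick distribute volume over RDMA like the one in the volume info above is typically created with the gluster CLI roughly as follows (a sketch, assuming a GlusterFS 3.1/3.2-era CLI; hostnames and brick paths are taken from the brick list above):

    gluster volume create gluster transport rdma \
        oss1:/mnt/ost1 oss1:/mnt/ost2 oss1:/mnt/ost3 oss1:/mnt/ost4 \
        oss2:/mnt/ost1 oss2:/mnt/ost2 oss2:/mnt/ost3 oss2:/mnt/ost4 \
        oss3:/mnt/ost1 oss3:/mnt/ost2 oss3:/mnt/ost3 oss3:/mnt/ost4 \
        oss4:/mnt/ost1 oss4:/mnt/ost2 oss4:/mnt/ost3 oss4:/mnt/ost4
    gluster volume start gluster

With no replica or stripe count given, the CLI creates a plain distribute volume, which matches the "Type: Distribute" shown above.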