performance stops at 1Gb

Craig,

Using multiple parallel bonnie++ benchmarks (4, 8, 16) does use several
files. These files are 1 GB each, and we take care that there are at
least 32 of them. As we have multiple processes (4, 8 or 16 bonnie++
instances) and each uses several files, we spread the I/O over different
storage bricks. I can see this when monitoring network and disk activity
on the bricks. For example: when bonnie++ does block reads/writes on a
striped (4-brick) volume, I notice that the load generated by the client
(network throughput) is evenly spread over the 4 nodes. These nodes have
plenty of CPU, memory, network and disk resources left! Yet the
accumulated throughput never exceeds 1 Gb.
The 10Gb NIC at the client is set to a fixed 10Gb, full duplex. All the
NICs on the storage bricks are 1Gb, fixed, full duplex. The 10Gb client
(dual quad-core, 16 GB) has plenty of resources to run 16 bonnie++
instances in parallel. We should be able to get more than 1 Gb of
throughput, especially with a striped volume.
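For reference, the parallel runs can be launched with something like the
sketch below (the mount point, run count and per-run size are assumptions
to adjust for your own volume; bonnie++ splits any data size above 1 GB
into 1 GB files, so 4 runs of 8 GB each give the 32 x 1 GB files
mentioned above):

```shell
# Sketch: launch N parallel bonnie++ runs against a Gluster mount.
# MOUNT and N are assumptions; adjust them for your own setup.
MOUNT=/mnt/gluster
N=4
for i in $(seq 1 "$N"); do
    # 'echo' keeps this a dry run; each run gets its own work directory.
    echo mkdir -p "$MOUNT/run$i"
    # -d: work dir; -s 8192: 8 GB of data, written as 8 files of 1 GB each;
    # -n 0: skip the small-file tests; -u: user to run as.
    echo bonnie++ -d "$MOUNT/run$i" -s 8192 -n 0 -u "$(id -un)"
done
# Drop 'echo', append '&' to each bonnie++ command and add a final 'wait'
# to actually run them in parallel.
```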

What kind of benchmarks do you run? And with what kind of setup?

Peter 



> Peter -
>     Using Gluster the performance of any single file is going to be
> limited to the performance of the server on which it exists, or in the
> case of a striped volume, of the server on which the segment of the
> file you are accessing exists. If you were able to start 4 processes,
> accessing different parts of the striped file, or lots of different
> files in a distribute cluster, you would see your performance increase
> significantly.
>
> Thanks,
>
> Craig
>
> --
> Craig Carl
> Senior Systems Engineer
> Gluster
> 
> 
> On 11/26/2010 07:57 AM, Gotwalt, P. wrote:
> > Hi All,
> >
> > I am doing some tests with Gluster (3.1) and have a problem of not
> > getting higher throughput than 1 Gb (yes, bits!) with 4 storage
> > bricks. My setup:
> >
> > 4 storage bricks (dual-core, 4 GB mem), each with 3 SATA 1 TB disks,
> > connected to a switch with 1 Gb NICs. In my tests I only use 1 SATA
> > disk as a volume, per brick.
> > 1 client (2x quad-core, 16 GB mem) with a 10Gb NIC to the same
> > switch as the bricks.
> >
> > When using striped or distributed configurations, with all 4 bricks
> > configured to act as a server, the performance is never higher than
> > just below 1 Gb! I tested with 4, 8 and 16 parallel bonnie++ runs.
> >
> > The idea is that the parallel bonnie++ runs create enough files to
> > get distributed over the storage bricks, and that together they
> > deliver enough throughput to fill up this 10Gb line. I expect the
> > throughput to be at most 4 Gb, because that's the maximum the 4
> > storage bricks together can produce.
> >
> > I also tested the throughput of the network with iperf3 and got:
> > - 5 Gb to a second temporary client on another switch 200 km from my
> > site, connected with a 5 Gb fiber
> > - 908-920 Mb to the interfaces of the bricks.
> > So the network seems OK.
> >
> > Can someone advise me on why I don't get 4 Gb? Or can someone advise
> > me on a better setup with the equipment I have?
> >
> >
> > Peter Gotwalt
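One further check that may help narrow this down: the iperf3 tests above
measured each brick link individually; driving all four links at once
shows whether the client side can actually sustain the ~4 Gb aggregate.
A sketch (hostnames brick1..brick4 are placeholders; run `iperf3 -s` on
each brick first):

```shell
# Sketch: measure aggregate throughput to all bricks at once with iperf3.
# Hostnames brick1..brick4 are placeholders; 'echo' keeps this a dry run.
for h in brick1 brick2 brick3 brick4; do
    echo iperf3 -c "$h" -t 10 -P 2    # 2 parallel streams per brick
done
# Drop 'echo' and background each client ('&', then 'wait') so all four
# 1 Gb links are driven simultaneously; if the sum also stalls near 1 Gb,
# the bottleneck is on the client side rather than in Gluster.
```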

