Re: ceph benchmark

Hello,

As others have already stated, your numbers don't add up or make sense.

More below.

On Thu, 16 Jun 2016 16:53:10 -0400 Patrick McGarry wrote:

> Moving this over to ceph-user where it’ll get the eyeballs you need.
> 
> On Mon, Jun 13, 2016 at 2:58 AM, Marcus Strasser
> <Marcus.Strasser@xxxxxxxxxxxxxxxx> wrote:
> > Hello!
> >
> >
> >
> > I have a little test cluster with 2 server. Each Server have an osd
> > with 800 GB, there is a 10 Gbps Link between the servers.
> >
What kind of OSDs are these?
The size suggests SSDs/NVMes, but without this information a huge piece of
the puzzle is missing. 
Exact models please.

Since you have 2 nodes, I presume you changed the replication size from 3
to 2.

This will give you better results, but since you will want to use 3 in
real life, keep in mind that your test results will be skewed.
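For reference, replication can be checked and adjusted per pool like this
(a sketch; the pool name cephfs_data is an assumption, substitute your own):

```shell
# Check the current replication size of a pool
# ("cephfs_data" is a placeholder pool name)
ceph osd pool get cephfs_data size

# Set replication to 2 for a 2-node test cluster;
# keep the default of 3 in production
ceph osd pool set cephfs_data size 2
ceph osd pool set cephfs_data min_size 1
```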

> > On a ceph-client i have configured a cephfs, mount kernelspace. The
> > client is also connected with a 10 Gbps Link.
> >
With a kernelspace mount and without specifying direct writes in dd, most
of your client's 64 GB of RAM will be used as pagecache.
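To take the client pagecache out of the measurement, dd can be told to
bypass or flush it (a sketch, using the target path from your own example):

```shell
# Bypass the pagecache entirely with direct I/O
dd if=/dev/zero of=/cephtest/test bs=1M count=10240 oflag=direct

# Or keep the cache, but flush data to the OSDs before dd
# reports its (then more honest) throughput figure
dd if=/dev/zero of=/cephtest/test bs=1M count=10240 conv=fdatasync
```

Expect the numbers to drop well below 3 GB/s once the cache is out of the
picture.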

> > All 3 use debian
> >
> > 4.5.5 kernel
> >
> > 64 GB mem
> >
> > There is no special configuration.
> >
> >
> >
> > Now the question:
> >
> > When i use the dd (~11GB) command in the cephfs mount, i get a result
> > of 3 GB/s
> >
3 GB/s is well above what a 10 Gbps network can carry (about 1.25 GB/s
line rate), so you're seeing the caching noted above.
The most sustainable speed you'd be able to achieve in your setup would be
about 1 GB/s, and even that is overly simplistic and optimistic.

Also, at this time I'd like to add my usual comment: in over 90% of all
use cases, speed as in bandwidth is a distant second to the much more
important speed in terms of IOPS.
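To measure IOPS rather than sequential bandwidth, fio is the usual tool.
A sketch (the file name and size are placeholders, adjust to taste):

```shell
# 4k random writes with direct I/O: this resembles what most real
# workloads do, and it is where Ceph latency dominates the result
fio --name=randwrite --filename=/cephtest/fio.test --size=4G \
    --rw=randwrite --bs=4k --direct=1 --iodepth=32 --numjobs=1 \
    --runtime=60 --time_based --group_reporting
```

Compare the IOPS figure from this against your sequential dd numbers; the
gap is usually eye-opening.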

> >
> >
> > dd if=/dev/zero of=/cephtest/test bs=1M count=10240
> >
> >
> >
> > Is it possible to transfer the data faster (use the full capacity of
> > the network) and cache it with the memory?
> >
Again, according to your numbers and description that's already happening.

Note that RAM on the storage servers will NOT help with write speeds; it
will be helpful for reads, and a large SLAB cache can prevent unnecessary
disk accesses.

Christian
> >
> >
> > Thanks,
> >
> > Marcus Strasser
> >
> >
> >
> >
> >
> > Marcus Strasser
> >
> > Linux Systeme
> >
> > Russmedia IT GmbH
> >
> > A-6850 Schwarzach, Gutenbergstr. 1
> >
> >
> >
> > T +43 5572 501-872
> >
> > F +43 5572 501-97872
> >
> > marcus.strasser@xxxxxxxxxxxxxxxx
> >
> > highspeed.vol.at
> >
> >
> >
> >
> 
> 
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



