Re: Performance questions (how original, I know)

Hello Mark,

On Thu, 19 Dec 2013 17:18:01 -0600 Mark Nelson wrote:

> On 12/16/2013 02:42 AM, Christian Balzer wrote:
> >
> > Hello,
> 
> Hi Christian!
> 
> >
> > new to Ceph, not new to replicated storage.
> > Simple test cluster with 2 identical nodes running Debian Jessie, thus
> > ceph 0.48. And yes, I very much prefer a distro-supported package.
> 
> I know you'd like to use the distro package, but 0.48 is positively 
> ancient at this point.  There's been a *lot* of fixes/changes since 
> then.  If it makes you feel better, our current professionally supported 
> release is based on dumpling.
> 
Oh well, I assume 0.48 was picked due to the "long term support" title
(and thus one would hope it at least received a steady stream of
backported fixes ^o^).
0.72 is in unstable, so for testing I will just push that test cluster
to sid and see what happens.
I'll also poke the Debian maintainer about a wheezy backport; if that's
not possible I'll use the source package to roll my own binary packages,
as sketched below.
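
For anyone wanting to do the same, the usual rebuild dance would be
roughly the below. This is only a sketch, assuming a deb-src entry for
sid in sources.list and that the 0.72 source package builds cleanly on
wheezy:

    # fetch the ceph source package from sid and unpack it locally
    apt-get source ceph
    # install the declared build dependencies
    sudo apt-get build-dep ceph
    # build unsigned binary packages from the unpacked source tree
    cd ceph-0.72*
    dpkg-buildpackage -us -uc -b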


[snip]
> >
> > Aside from that cringeworthy drop to 15% of the backing storage speed
> > (and network link), which I presume might be salvageable by using an
> > SSD journal, I'm more than puzzled by the read speed.
> > For starters I would have assumed that in this 2-replica setup all
> > data is present on the local node a and Ceph would be smart enough to
> > get it all locally. But even if it were talking to both nodes a and b
> > (or just b) I would have expected something in the 100MB/s range.
> 
> Ceph always reads data from the primary OSD, so wherever the 
> primary is located, that's where it will read from.  The good news is 
> that this gives you a better probability of spreading your reads out 
> over the whole cluster. The bad news is that you have more network 
> traffic to deal with.
> 
I assumed as much; I was just expecting something closer to link speed
for reads on an otherwise quiet cluster and network.
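
For what it's worth, one can check where a given object's primary
lives: "ceph osd map" prints the placement group and the acting set,
with the primary listed first. Pool and object names below are made up
for illustration:

    # map a hypothetical object to its PG and OSDs; the first OSD in
    # the acting set is the primary, which serves the reads
    ceph osd map rbd some-test-object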

> >
> > Any insights would be much appreciated.
> 
> With 0.48 it's kind of tough to make any recommendations because I 
> frankly don't remember exactly everything that's changed since then. 
> You'll probably want to make sure that syncfs is being used, and you 
> probably will want to play around with enabling/disabling the filestore 
> flusher and maybe turning journal aio on.  Looks like RBD cache was 
> included in 0.46, so you can try enabling that, but it had performance 
> issues with sequential writes before cuttlefish.
> 
> At least you'll be on a relatively modern kernel!
> 
As I said above, I will try with Emperor, and even on wheezy I tend to
roll my own, more up-to-date kernels.
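
For anyone finding this in the archives: the knobs Mark mentions map to
ceph.conf options roughly as below. This is only a sketch; the exact
option names and sensible values should be verified against the docs
for whatever release you end up on:

    [osd]
        ; try the filestore flusher both enabled and disabled
        filestore flusher = false
        ; use Linux AIO for journal writes
        journal aio = true

    [client]
        ; client-side RBD cache; reportedly had sequential write
        ; issues before cuttlefish, so benchmark with it on and off
        rbd cache = true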

Thanks, 

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



