Re: ceph, ssds, hdds, journals and caching

On Sat, 4 Oct 2014 11:16:05 +0100 (BST) Andrei Mikhailovsky wrote:

> > While I doubt you're hitting any particular bottlenecks on your
> > storage servers, I don't think Zabbix (very limited experience with
> > it, so I might be wrong) monitors everything, nor does it do so at a
> > sufficiently high frequency to show what is going on during a peak
> > or a fio test from a client.
> > Thus my suggestion to stare at it live with atop (on all nodes).
> 
> I will give it a go and see what happens during benchmarks. Atop is
> rather informative indeed! There is a Zabbix plugin/template for Ceph
> which gives a good overview of the cluster. It does not provide the
> level of detail that you would get from the admin socket, but rather
> an overview of cluster throughput and IO rates as well as PG status.
> 
Yeah, Nagios has that as well, but for performance testing and
troubleshooting that isn't enough.
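For what it's worth, watching atop at a short interval on every node
while the fio job runs, combined with a peek at the OSD admin sockets,
usually narrows things down. Something along these lines, as a rough
sketch only (osd.0 and the default socket path are just examples,
adjust to your layout):

  atop 2
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight

The perf dump output is verbose, but the latency counters in there
tend to point at the guilty party fairly quickly.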

> > > My biggest concern is the single thread performance of VMs. From
> > > what I can see, this is the main downside of Ceph. On average, I
> > > am not getting much over 35-40MB/s per thread on cold data reads.
> > > This is compared with a single hdd read performance of
> > > 150-160MB/s. Getting about 1/4 of the raw device performance is a
> > > bit worrying, especially compared with what I've read; I should be
> > > getting about 1/2 of the raw drive performance for a single
> > > thread, but I am not. My hope was that with a caching tier I could
> > > increase it.
> > >
> > Have a look at:
> > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-April/028552.html
> >
> > Your numbers look very much like mine before increasing the
> > read_ahead buffer.
> 
> How much of a performance gain did you see from setting the
> read_ahead values? The performance figures that I get are with the
> following udev rules:
> 
The settings below look like you're applying them on the storage nodes.

Read the above link again, carefully. ^o^
In it I state that:
a) despite what old posts may suggest, setting read_ahead on the OSD
nodes has no effect or even a negative one; inside the VM, however, it
is very helpful;

b) the read speed increased about 10 times, from 35MB/s to 380MB/s.
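
For completeness, "inside the VM" means the guest's own block devices:
the same style of udev rule you already use works there, just aimed at
the virtual disks. Something like the following, with vd[a-z]/vda and
the 2048 figure purely as illustrative values to experiment with:

  # guest-side udev rule (illustrative values)
  ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/read_ahead_kb}="2048"

  # or set it on the fly for a quick test (blockdev counts 512-byte
  # sectors, so 4096 sectors = 2048 KB)
  echo 2048 > /sys/block/vda/queue/read_ahead_kb
  blockdev --setra 4096 /dev/vda

Reading /sys/block/vda/queue/read_ahead_kb back afterwards confirms
the value stuck.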

Regards,

Christian
> # set read_ahead values
> ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/read_ahead_kb}="2048"
> ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/nr_requests}="2048"
> # set noop scheduler for non-rotating disks
> ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
> # set cfq scheduler for rotating disks
> ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="cfq"
> 
> Is there anything else that I am missing? 
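
(As a sanity check, the values any of those rules actually applied can
be read straight back from sysfs on the node in question; sda here is
just an example device:

  cat /sys/block/sda/queue/read_ahead_kb
  cat /sys/block/sda/queue/nr_requests
  cat /sys/block/sda/queue/scheduler

The scheduler file shows the active one in square brackets.)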


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



