Re: How can I monitor current ceph operation at cluster


 



> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> nick
> Sent: 11 April 2016 08:26
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  How can I monitor current ceph operation at
> cluster
> 
> Hi,
> > We're parsing the output of 'ceph daemon osd.N perf dump' for the
> > admin sockets in /var/run/ceph/ceph-osd.*.asok on each node in our
> > cluster. We then push that data into carbon-cache/Graphite and use
> > Grafana for visualization.
> Which of those values are you using for monitoring? I can see a lot of
> numbers when doing a 'ceph daemon osd.N perf dump'. Do you know if
> there is any documentation on what each value means? I could only find
> http://docs.ceph.com/docs/hammer/dev/perf_counters/, which describes
> the schema.

I'm currently going through them and trying to write a short doc explaining
what each one measures. Are you just interested in the total number of read
and write ops across the whole cluster?
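
If it helps, a simplified sketch of that kind of collector might look like
the following (untested; the counter names 'op', 'op_r', 'op_w' and the
metric path are examples and may differ between releases, so check your own
'perf dump' output first):

#!/usr/bin/env python
# Sketch: read op counters from each local OSD admin socket and push them to
# carbon's plaintext port. Counter names and metric paths are examples only.
import glob
import json
import re
import socket
import subprocess
import time

CARBON_HOST = "127.0.0.1"   # carbon-cache plaintext listener (assumption)
CARBON_PORT = 2003

def osd_ids():
    """Find local OSD ids from their admin sockets."""
    for path in glob.glob("/var/run/ceph/ceph-osd.*.asok"):
        m = re.search(r"ceph-osd\.(\d+)\.asok", path)
        if m:
            yield m.group(1)

def perf_dump(osd_id):
    """Return the parsed JSON from 'ceph daemon osd.N perf dump'."""
    out = subprocess.check_output(
        ["ceph", "daemon", "osd.%s" % osd_id, "perf", "dump"])
    return json.loads(out.decode())

def main():
    now = int(time.time())
    host = socket.gethostname()
    lines = []
    for osd_id in osd_ids():
        osd = perf_dump(osd_id).get("osd", {})
        # 'op', 'op_r' and 'op_w' are cumulative counters in the 'osd'
        # section; Graphite can turn them into rates later.
        for counter in ("op", "op_r", "op_w"):
            if counter in osd:
                lines.append("ceph.%s.osd.%s.%s %s %d"
                             % (host, osd_id, counter, osd[counter], now))
    if lines:
        sock = socket.create_connection((CARBON_HOST, CARBON_PORT))
        sock.sendall(("\n".join(lines) + "\n").encode())
        sock.close()

if __name__ == "__main__":
    main()

Run it from cron (or a Zabbix UserParameter) on every OSD node and the
per-OSD counters end up in Graphite, ready to be summed cluster-wide.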


> 
> Best Regards
> Nick
> 
> > Our numbers are much more consistent than yours appear.
> >
> > Bob
> >
> > On Thu, Apr 7, 2016 at 2:34 AM, David Riedl <david.riedl@xxxxxxxxxxx> wrote:
> > > Hi.
> > >
> > > I use this for my zabbix environment:
> > >
> > > https://github.com/thelan/ceph-zabbix/
> > >
> > > It works really well for me.
> > >
> > >
> > > Regards
> > >
> > > David
> > >
> > > On 07.04.2016 11:20, Nick Fisk wrote:
> > >   Hi.
> > >
> > > I have a small question about monitoring performance on our Ceph cluster.
> > >
> > > We have a cluster with 5 nodes and 8 drives on each node, and 5
> > > monitors, one on every node. For monitoring the cluster we use
> > > Zabbix. It polls every node every 30 seconds for the current Ceph
> > > operations per second and gets a different result from every node:
> > > first node:     350 op/s
> > > second node:    900 op/s
> > > third node:     200 op/s
> > > fourth node:    700 op/s
> > > fifth node:    1200 op/s
> > >
> > > I don't understand how I can get the total performance value for the
> > > whole Ceph cluster?
> > >
> > > Easy Answer
> > > Capture and parse the output from "ceph -s". It's not 100% accurate,
> > > but probably good enough for a graph.
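
As a rough illustration of the easy answer, something like this (untested
sketch) could be wrapped in a Zabbix UserParameter or cron job; the regex
assumes the "client io ... op/s" line that hammer-era "ceph -s" prints and
will need adjusting for other releases:

#!/usr/bin/env python
# Sketch: pull the cluster-wide op/s figure out of 'ceph -s'.
import re
import subprocess

def cluster_ops_per_sec():
    status = subprocess.check_output(["ceph", "-s"]).decode()
    m = re.search(r"(\d+)\s*op/s", status)
    return int(m.group(1)) if m else 0

if __name__ == "__main__":
    print(cluster_ops_per_sec())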
> > >
> > > Complex Answer
> > > Use something like Graphite to capture all the counters for every
> > > OSD and then use something like sumSeries to add all the op/s
> > > counters together.
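
Assuming metric paths like the ones in the collector sketch earlier in this
mail (ceph.<host>.osd.<id>.op, which is my naming, not a Ceph default), the
Graphite/Grafana target for a cluster-wide op/s series would be something like:

    sumSeries(perSecond(ceph.*.osd.*.op))

perSecond() turns the cumulative counters into rates before summing;
nonNegativeDerivative() is an alternative on older graphite-web releases.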
> > >
> > >
> > >
> > >
> > >
> > >
> > > --
> > > Kind regards
> > >
> > > David Riedl
> > >
> > >
> > >
> > > WINGcon GmbH Wireless New Generation - Consulting & Solutions
> > >
> > > Phone: +49 (0) 7543 9661 - 26
> > > E-Mail: david.riedl@xxxxxxxxxxx
> > > Web: http://www.wingcon.com
> > >
> > > Registered office: Langenargen
> > > Register court: Ulm, HRB 632019
> > > VAT ID: DE232931635, WEEE ID: DE74015979
> > > Managing directors: Thomas Ehrle, Fritz R. Paul
> > >
> > >
> > > _______________________________________________
> > > ceph-users mailing list
> > > ceph-users@xxxxxxxxxxxxxx
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> --
> Sebastian Nickel
> Nine Internet Solutions AG, Albisriederstr. 243a, CH-8047 Zuerich
> Tel +41 44 637 40 00 | Support +41 44 637 40 40 | www.nine.ch

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



