Re: Ceph Monitoring

Hello Marius Vaitiekunas, Chris Jones,

Thank you for your contributions.
I was looking for this information.

I'm starting to use Ceph, and my concern is about monitoring.

Do you have any scripts for this kind of monitoring?
If you could share them, I would be very grateful.

(Apologies if I have misunderstood anything.)

Best Regards,
André Forigato 

----- Original Message -----
> From: "Marius Vaitiekunas" <mariusvaitiekunas@xxxxxxxxx>
> To: "Chris Jones" <cjones@xxxxxxxxxxx>, ceph-users@xxxxxxxx
> Sent: Sunday, January 15, 2017 19:26:05
> Subject: Re: Ceph Monitoring

> On Fri, 13 Jan 2017 at 22:15, Chris Jones <cjones@xxxxxxxxxxx> wrote:

>> General question/survey:

>> Those of you with larger clusters, how are you doing alerting/monitoring?
>> Meaning, do you trigger off of 'HEALTH_WARN', etc.? I'm not really talking
>> about collectd-related metrics, but more about the initial alerts of an issue
>> or potential issue. What threshold do you use, basically? Just trying to get
>> a pulse of what others are doing.

>> Thanks in advance.

>> --
>> Best Regards,
>> Chris Jones
>> Bloomberg
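
For readers looking for a concrete starting point, the simplest way to trigger
off the overall health status is to check the first word of 'ceph health'. A
minimal sketch follows; the mapping of HEALTH_WARN/HEALTH_ERR to warning and
critical exit codes is only one possible convention, not something prescribed
in this thread:

#!/usr/bin/env python
# Map the overall Ceph health status to Nagios-style exit codes.
# HEALTH_WARN -> warning (1), HEALTH_ERR or anything else -> critical (2).
import subprocess
import sys

out = subprocess.check_output(["ceph", "health"]).decode().strip()

if out.startswith("HEALTH_OK"):
    print("OK: %s" % out)
    sys.exit(0)
if out.startswith("HEALTH_WARN"):
    print("WARNING: %s" % out)
    sys.exit(1)
print("CRITICAL: %s" % out)
sys.exit(2)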

> Hi,

> We monitor for 'low IOPS'. The number differs between our clusters. For
> example, if we see only 3,000 IOPS, something is wrong.
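
A minimal sketch of such a 'low IOPS' check is below. The 3,000 threshold is
the illustrative number from above, and the assumption that 'ceph -s -f json'
exposes read_op_per_sec/write_op_per_sec under pgmap may not hold on every
Ceph release:

#!/usr/bin/env python
# Alert when cluster-wide IOPS drops below a threshold.
# Assumes 'ceph -s -f json' reports pgmap.read_op_per_sec / write_op_per_sec;
# the field names may differ between Ceph releases.
import json
import subprocess
import sys

IOPS_THRESHOLD = 3000  # illustrative value taken from the discussion above

status = json.loads(subprocess.check_output(["ceph", "-s", "-f", "json"]))
pgmap = status.get("pgmap", {})
iops = pgmap.get("read_op_per_sec", 0) + pgmap.get("write_op_per_sec", 0)

if iops < IOPS_THRESHOLD:
    print("CRITICAL: cluster IOPS %d below threshold %d" % (iops, IOPS_THRESHOLD))
    sys.exit(2)
print("OK: cluster IOPS %d" % iops)
sys.exit(0)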

> Another good check is the S3 API: we try to read an object through the S3
> API every 30 seconds.
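
A sketch of such a periodic S3 read check, assuming boto3 and a pre-created
canary bucket and object; the endpoint, credentials, bucket and key below are
placeholders, and the script is meant to be run by the monitoring system on
its own schedule (e.g. every 30 seconds):

#!/usr/bin/env python
# Read a known canary object through the RGW S3 API and report Nagios-style.
# Endpoint, credentials, bucket and key are placeholders for your environment.
import sys

import boto3
from botocore.exceptions import BotoCoreError, ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

try:
    obj = s3.get_object(Bucket="monitoring", Key="canary.txt")
    obj["Body"].read()
except (BotoCoreError, ClientError) as exc:
    print("CRITICAL: S3 canary read failed: %s" % exc)
    sys.exit(2)

print("OK: S3 canary object readable")
sys.exit(0)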

> We also have many checks like more than 10% of OSDs down, PGs inactive, the
> cluster having degraded capacity, and similar. Some of these checks are not
> critical, and for those we only get emails.
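
As an illustration, the 'more than 10% of OSDs down' case could be built on
'ceph osd stat -f json'; the num_osds/num_up_osds field names are an
assumption and should be verified against the release in use:

#!/usr/bin/env python
# Alert when more than 10% of OSDs are down.
# Assumes 'ceph osd stat -f json' exposes num_osds and num_up_osds;
# verify the field names on your Ceph release.
import json
import subprocess
import sys

stat = json.loads(subprocess.check_output(["ceph", "osd", "stat", "-f", "json"]))
total = stat["num_osds"]
up = stat["num_up_osds"]
down_ratio = float(total - up) / total if total else 0.0

if down_ratio > 0.10:
    print("CRITICAL: %d of %d OSDs down" % (total - up, total))
    sys.exit(2)
print("OK: %d of %d OSDs up" % (up, total))
sys.exit(0)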

> One more important thing is disk latency monitoring. We've had huge
> slowdowns on our cluster when journaling SSDs wear out. It's quite hard to
> understand what's going on, because all OSDs are up and running, but the
> cluster is not performing at all.
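
One way to catch that early is to watch per-OSD commit/apply latency from
'ceph osd perf'. The JSON layout below (osd_perf_infos, commit_latency_ms,
apply_latency_ms) is an assumption based on common releases, and the 100 ms
threshold is purely illustrative:

#!/usr/bin/env python
# Flag OSDs whose commit or apply latency exceeds a threshold.
# Assumes 'ceph osd perf -f json' returns osd_perf_infos entries with
# perf_stats.commit_latency_ms / apply_latency_ms; check your release.
import json
import subprocess
import sys

LATENCY_MS = 100  # illustrative threshold

perf = json.loads(subprocess.check_output(["ceph", "osd", "perf", "-f", "json"]))
slow = []
for info in perf.get("osd_perf_infos", []):
    stats = info.get("perf_stats", {})
    worst = max(stats.get("commit_latency_ms", 0), stats.get("apply_latency_ms", 0))
    if worst > LATENCY_MS:
        slow.append((info.get("id"), worst))

if slow:
    print("WARNING: slow OSDs: %s" % ", ".join("osd.%s=%sms" % s for s in slow))
    sys.exit(1)
print("OK: all OSD latencies below %dms" % LATENCY_MS)
sys.exit(0)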

> Network errors on interfaces can also be important. We had issues when a
> physical cable was malfunctioning and the cluster had many blocked requests.
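
A simple sketch of an interface error check that reads the kernel counters
under /sys/class/net; the interface name is a placeholder, and a real check
would compare against the previous sample instead of the absolute counters:

#!/usr/bin/env python
# Report rx/tx error counters for one interface from sysfs.
# The interface name is a placeholder; a production check should alert on the
# delta since the last run rather than on the absolute counter value.
import sys

IFACE = "eth0"  # placeholder

def read_counter(name):
    with open("/sys/class/net/%s/statistics/%s" % (IFACE, name)) as f:
        return int(f.read().strip())

rx_errors = read_counter("rx_errors")
tx_errors = read_counter("tx_errors")

if rx_errors or tx_errors:
    print("WARNING: %s rx_errors=%d tx_errors=%d" % (IFACE, rx_errors, tx_errors))
    sys.exit(1)
print("OK: no interface errors on %s" % IFACE)
sys.exit(0)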

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



