Fwd: Ideas on the UI/UX improvement of ceph-mgr: Cluster Status Dashboard

---------- Forwarded message ----------
From: saumay agrawal <saumay.agrawal@xxxxxxxxx>
Date: Wed, Aug 23, 2017 at 11:35 AM
Subject: Re: Ideas on the UI/UX improvement of ceph-mgr: Cluster
Status Dashboard
To: nagarrajan raghunathan <nagu.raghu99@xxxxxxxxx>,
ceph-users@xxxxxxxx, Ceph Development <ceph-devel@xxxxxxxxxxxxxxx>


Hi Nagarrajan,

For graph prototypes, you can point your browser to
localhost:41000/perf_graph_prototypes/{perf counter}, replacing {perf
counter} with a counter name such as osd.op_latency or osd.loadavg.
You can find more of the perf counters at
localhost:41000/get_perf_schema/ under the osd objects, along with
each counter's description. These graphs work for the perf counters
that give a time sequence of values.
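The steps above can be scripted as well. Here is a minimal sketch in Python that builds the two URLs described above and (optionally) fetches them; the host, port, and paths are taken from this email, and the function names are hypothetical:

```python
# Hedged sketch: construct the dashboard URLs mentioned above for a
# given perf counter. Assumes the dashboard is served on localhost:41000
# as described in this thread; function names are illustrative only.
from urllib.parse import quote

BASE = "http://localhost:41000"

def prototype_url(counter):
    # e.g. "osd.op_latency" -> http://localhost:41000/perf_graph_prototypes/osd.op_latency
    return f"{BASE}/perf_graph_prototypes/{quote(counter)}"

def schema_url():
    # Lists the available perf counters and their descriptions.
    return f"{BASE}/get_perf_schema/"

# Fetching requires a running ceph-mgr dashboard, e.g.:
# import urllib.request
# with urllib.request.urlopen(prototype_url("osd.op_latency")) as resp:
#     page = resp.read()
```

This only builds the URLs; actually retrieving the pages requires a live cluster with the dashboard module enabled.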

Also, you can view a summary of how these graphs work and how to
access them, along with sample snapshots, in the comments of PR
https://github.com/ceph/ceph/pull/16621.

Regards,
Saumay.

On Aug 22, 2017 11:57 PM, "nagarrajan raghunathan"
<nagu.raghu99@xxxxxxxxx> wrote:

Hi Saumay,
            Could you please explain how to use this tool? For
example, if I have a Ceph cluster running, how do I monitor it using
this tool? Any guidelines would be great.

On Tue, Aug 22, 2017 at 5:15 AM, saumay agrawal
<saumay.agrawal@xxxxxxxxx> wrote:
>
> Hi,
>
> As a part of my project, I have been working on the visualisation of
> OSD performance on the dashboard. Based on community feedback, I
> realised that visualising perf counter values against the first few
> standard deviations was the most needed feature for performance
> graphs, along with visualising the minimum and maximum values.
>
> For this, I have created a generalised prototype page, which shows the
> prototypes of various graphs for a given performance counter. I also
> added a separate page to the dashboard, which visualises the read and
> write latency distribution of a ceph cluster.
>
> As of now, this is a PR at https://github.com/ceph/ceph/pull/16621.
> Any suggestions are welcome.
>
> Thanks,
> Saumay
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html




-- 
Regards,
Nagarrajan Raghunathan
