Re: RADOSGW Multi-Site Sync Metrics



Hi Rhys,

I'm working on exposing per-shard sync deltas for the data log as
Prometheus metrics.

Specifically, the value of the metric would be the sync delta for a data
log shard, with labels for the shard number, the source zone, and the
destination zone.
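To make that concrete, here is a minimal sketch of what one sample of such a metric might look like in Prometheus exposition format. The metric name `ceph_rgw_data_sync_delta` and the exact label names are my illustrative assumptions, not the final implementation:

```python
# Illustrative sketch only: the metric name and label names below are
# assumptions for discussion, not what will necessarily ship.
def format_sync_delta(shard, source_zone, dest_zone, delta):
    """Render one sample in Prometheus text exposition format."""
    return (
        'ceph_rgw_data_sync_delta'
        '{shard="%s",source_zone="%s",dest_zone="%s"} %d'
        % (shard, source_zone, dest_zone, delta)
    )

# e.g. shard 17 syncing from zone-a to zone-b is 42 entries behind:
print(format_sync_delta(17, "zone-a", "zone-b", 42))
# ceph_rgw_data_sync_delta{shard="17",source_zone="zone-a",dest_zone="zone-b"} 42
```

With labels like these you could aggregate in PromQL per source/destination pair or alert on any single shard falling behind.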

This work is in progress; it will merge to main and could potentially
land in Squid. As it moves along, I'm happy to post more details and
updates to this thread.
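In the meantime, one stopgap (my own suggestion, not an official tool) is to scrape the `radosgw-admin sync status` output you showed below and turn the shard counts into numbers you can alert on. A rough sketch:

```python
import re

# Stopgap sketch, not an official interface: parse the human-readable
# output of `radosgw-admin sync status` and count shards still pending
# full sync across the metadata and data sync sections.
SHARD_RE = re.compile(r"full sync: (\d+)/(\d+) shards")

def shards_pending_full_sync(status_text):
    """Total shards still in full sync; 0 means all shards incremental."""
    return sum(int(m.group(1)) for m in SHARD_RE.finditer(status_text))

sample = """\
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
      data sync source: ... (zone-name)
                full sync: 0/128 shards
                incremental sync: 128/128 shards
"""
print(shards_pending_full_sync(sample))  # 0
```

Note that this text format is not a stable interface, so a parser like this can break across releases; it's only a bridge until real metrics exist.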


On Thu, Feb 1, 2024 at 4:24 PM Rhys Powell <RPowell@xxxxxxxxxxxxxxxx> wrote:

> Hi All,
> I am in the process of implementing a multi-site RGW instance and have
> successfully set up a POC and confirmed the functionality.
> I am working on metrics and alerting for this service, and I am not seeing
> metrics available for the output shown by
> radosgw-admin sync status --rgw-realm=<<realm-name>>
> Sample output:
> [@cepha-cn02 ~]# radosgw-admin sync status --rgw-realm=<<realm-name>>
>           realm a207b396-8d1b-408b-851e-10ad545861b7 (realm-name)
>       zonegroup 77e8924b-05e3-4d86-b887-aedd7fe5306c (zonegroup-name)
>            zone a26c27b2-d6ac-4eab-a4ce-1036ce2d37dc (zone-name)
>   metadata sync syncing
>                 full sync: 0/64 shards
>                 incremental sync: 64/64 shards
>                 metadata is caught up with master
>       data sync source: 8c7d69db-85ae-45f4-b4ec-f712fad4af07 (zone-name)
>                         syncing
>                         full sync: 0/128 shards
>                         incremental sync: 128/128 shards
>                         data is caught up with source
> I'd like to measure, track, and alert on shard status during sync
> operations.
> Is there a way to expose these metrics? I'm struggling to find guidance or
> details.
> Thanks in advance
> Rhys
> Rhys Powell (He/Him)
> KORE<> | Senior Systems Engineer
> rpowell@xxxxxxxxxxxxxxxx
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
