Re: Ceph bucket notification events stop working



Hi Daniel,
I assume you are using persistent topics?
We recently fixed a bug where the queue of a persistent notification was
not deleted when the topic was removed via radosgw-admin. However, there
are no plans to backport that fix to Pacific.
Regardless, the behavior you describe does not seem related to that bug.
Even if the queue deletion did not happen, creating a new topic with a new
endpoint should still work. So it would be great if you could open a
tracker issue and/or send the exact details of how to reproduce the problem.
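For inspecting and cleaning up topics from the admin side, the radosgw-admin
topic subcommands can help; a minimal sketch (the topic name "mytopic" is a
placeholder):

```shell
# List all topics known to the zone, to spot stale ones
radosgw-admin topic list

# Show the configuration of one topic (endpoint, attributes, ...)
radosgw-admin topic get --topic=mytopic

# Remove a stale topic; note that on Pacific this does NOT remove
# the bucket notifications still subscribed to it
radosgw-admin topic rm --topic=mytopic
```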



On Tue, Aug 8, 2023 at 10:02 AM <daniel.yordanov1@xxxxxxxxxxxx> wrote:

> Hello,
> We started using Ceph bucket notification events with a subscription to
> an HTTP endpoint.
> We ran into an issue when the receiver endpoint was changed and the
> events from Ceph were no longer consumed. We deleted the bucket
> notifications and the topic, then created a new topic with the new
> endpoint and new bucket notifications.
> (We use the REST API to create bucket notifications and topics. We also
> tried the CLI commands, but found that deleting a topic does not delete
> the notifications subscribed to it. The Ceph version is Pacific.)
> From that moment on we have not received any notification events at our
> new endpoint.
> We have tried many times to create new topics and new bucket
> notifications, but events still do not reach our endpoint.
> We suspect that the notification queues are not fully cleaned up and are
> left in a broken state.
> We were able to reproduce this locally, and the only fix was to wipe all
> the containers and recreate them. The problem is that the issue occurs on
> a staging environment where we cannot destroy everything.
> We are looking for a solution or a command to clean up the notification
> queues so that we can start anew.
> We are also looking for a way to know programmatically whether the
> notifications are broken, and a way to recover automatically, since such
> a flaw is critical for our application.
> Thanks for your time!
> Daniel Yordanov
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
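Since deleting a topic from the CLI leaves its bucket notifications behind,
clearing the notification configuration on the bucket itself may help before
recreating everything. A hedged sketch using the AWS CLI against RGW (the
bucket name and endpoint URL are assumptions):

```shell
# Inspect what notification configuration is still attached to the bucket
aws --endpoint-url http://rgw.example.com:8000 \
    s3api get-bucket-notification-configuration --bucket mybucket

# Overwrite it with an empty configuration to drop all stale notifications,
# then recreate the topic and notifications from scratch
aws --endpoint-url http://rgw.example.com:8000 \
    s3api put-bucket-notification-configuration \
    --bucket mybucket --notification-configuration '{}'
```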
