Hi,

Thanks for the quick response. I have the notifications going to an HTTP endpoint running on one of the RGW machines, so the latency is as low as I can make it for both methods. If the limiting factor is at the RADOS layer, is my only tuning option to put the RGW log pool on the fastest media I have available?

On Thu, 24 Nov 2022 at 13:37, Yuval Lifshitz <ylifshit@xxxxxxxxxx> wrote:

> Hi Steven,
> When using synchronous (=non-persistent) notifications, the overall rate
> depends on the latency between the RGW and the endpoint to which you
> are sending the notifications. The protocols for sending the
> notifications (kafka/amqp) use batches and are usually very
> efficient. However, if there is latency, this will slow down the RGW.
>
> When using asynchronous (=persistent) notifications, they are written to a
> RADOS-backed queue by the RGW that received the request, and then pulled
> from that queue by some other (or sometimes the same) RGW, which sends the
> notifications to the endpoint.
> Pulling from the queue and sending the notifications is usually very
> fast; however, writing to the notification queue is a RADOS operation. The
> amount of information written to the queue is usually small but still has
> the RADOS overhead, as the notifications are written one by one. So, in
> this case, the limiting factor would be the RADOS IOPS.
>
> Please let me know if this clarifies the behavior you observe.
>
> Yuval
>
> On Thu, Nov 24, 2022 at 1:27 PM Steven Goodliff <sgoodliff@xxxxxxxxx>
> wrote:
>
>> Hi,
>>
>> I'm really struggling with persistent bucket notifications running 17.2.3.
>> I can't get much more than 600 notifications a second, but when changing
>> to async I see higher rates using the following metric:
>>
>> sum(rate(ceph_rgw_pubsub_push_ok[$__rate_interval]))
>>
>> I believe this is mainly down to being throttled by using one RGW rather
>> than all the RGWs the async method allows.
>>
>> We would prefer to use persistent but can't get the throughput we need;
>> any suggestions would be much appreciated.
>>
>> Thanks
>>
>> Steven
>>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
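
[Editor's note] For reference, below is a minimal sketch (not from the thread) of creating a persistent notification topic against RGW's SNS-compatible API with boto3 and wiring it to a bucket. The endpoint URLs, credentials, bucket name, and topic name are placeholders; the "persistent": "true" attribute is what routes notifications through the RADOS-backed queue discussed above, and omitting it (or setting it to "false") falls back to synchronous delivery.

import boto3

RGW_ENDPOINT = "http://rgw.example.com:8000"   # placeholder RGW endpoint
HTTP_PUSH_ENDPOINT = "http://10.0.0.1:9000"    # placeholder HTTP receiver

session = boto3.Session(
    aws_access_key_id="ACCESS_KEY",            # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

# region_name should typically match the zonegroup name ("default" here).
sns = session.client("sns", endpoint_url=RGW_ENDPOINT, region_name="default")
s3 = session.client("s3", endpoint_url=RGW_ENDPOINT, region_name="default")

# "persistent": "true" asks RGW to queue the notification in the RADOS-backed
# queue instead of delivering it synchronously during the client request.
topic = sns.create_topic(
    Name="my-topic",
    Attributes={
        "push-endpoint": HTTP_PUSH_ENDPOINT,
        "persistent": "true",
    },
)

# Attach the topic to a bucket for object-creation events.
s3.put_bucket_notification_configuration(
    Bucket="my-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "Id": "notif-1",
                "TopicArn": topic["TopicArn"],
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)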