Re: Reef: RGW Multisite object fetch limits

Correct. Nagle's algorithm has been disabled on the RGWs and the load
balancers, but we're not seeing any significant difference, and it also
looks like the RGWs don't support HTTP/2. In addition to this, we're
also affected by [1].

[1] https://tracker.ceph.com/issues/64999
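
(For reference, disabling Nagle boils down to setting TCP_NODELAY on
the socket. A minimal Python sketch of that, purely as an illustration;
the host and port are placeholders, not RGW or load-balancer config:)

    import socket

    # Open a TCP connection and disable Nagle's algorithm (TCP_NODELAY)
    # so small writes are sent immediately instead of being coalesced.
    # "rgw.example.com" and 8080 are placeholder values for illustration.
    sock = socket.create_connection(("rgw.example.com", 8080))
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    # With TCP_NODELAY set, each send() goes out without waiting for
    # outstanding ACKs, removing Nagle-induced latency on small objects.
    sock.sendall(b"GET / HTTP/1.1\r\nHost: rgw.example.com\r\n\r\n")
    print(sock.recv(4096).decode(errors="replace"))
    sock.close()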

Regards,
Jayanth

On Thu, May 16, 2024 at 2:40 PM Janne Johansson <icepic.dz@xxxxxxxxx> wrote:

> Den tors 16 maj 2024 kl 07:47 skrev Jayanth Reddy <
> jayanthreddy5666@xxxxxxxxx>:
> >
> > Hello Community,
> > In addition, we have 3+ Gbps links and the average object size is
> > 200 kilobytes. So the utilization is about 300 Mbps to ~1.8 Gbps and
> > not more than that.
> > We sometimes seem to saturate the link when the secondary zone
> > fetches bigger objects, but the rate always seems to be 1k to 1.5k
> > objects per second.
>
> Is it possible that the small object sizes make it impossible for the
> replication to reach any decent speed?
>
> If it makes a new TCP connection for every S3 object, then round-trip
> times and the small object sizes would make it impossible to get up to
> any decent speed over the network before the object is finished, and
> then it starts over with a new object, a new slow start, and so on.
>
> --
> May the most significant bit of your life be positive.
>
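
A rough back-of-envelope sketch of the slow-start effect Janne
describes, in Python. The 10 ms RTT, MSS, and initial congestion window
are assumed values for illustration, not measurements from this cluster:

    def rtts_to_send(object_bytes, mss=1460, initial_cwnd_segments=10):
        """Rough number of round trips TCP slow start needs to push
        object_bytes over a fresh connection (ignores ACK pacing)."""
        sent = 0
        cwnd = initial_cwnd_segments
        rtts = 0
        while sent < object_bytes:
            sent += cwnd * mss
            cwnd *= 2          # cwnd doubles every RTT during slow start
            rtts += 1
        return rtts

    # Example: a 200 KB object over an assumed 10 ms RTT link.
    obj = 200 * 1024
    rtt_s = 0.010
    rtts = rtts_to_send(obj)            # ~4 round trips
    per_object_s = rtts * rtt_s         # dominated by RTT, not bandwidth
    print(rtts, per_object_s, obj / per_object_s / 1e6, "MB/s per connection")

At those assumed numbers, a single fresh connection moves only a few
dozen small objects per second, so reaching 1k to 1.5k objects per
second would need many connections in flight; reusing connections
(keep-alive) would avoid repeating slow start for every object.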
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



