On Mon, 2023-11-06 at 09:03 +0800, xiubli@xxxxxxxxxx wrote:
> From: Xiubo Li <xiubli@xxxxxxxxxx>
> 
> There is no any limit for the extent array size and it's possible
> that we will hit 4096 limit just after a lot of random writes to
> a file and then read with a large size. In this case the messager
> will fail by reseting the connection and keeps resending the inflight
> IOs infinitely.
> 
> Just increase the limit to a larger number and then warn it to
> let user know that allocating memory could fail with this.
> 
> URL: https://tracker.ceph.com/issues/62081
> Signed-off-by: Xiubo Li <xiubli@xxxxxxxxxx>
> ---
> 
> V2:
> - Increase the MAX_EXTENTS instead of removing it.
> - Do not return an errno when hit the limit.
> 
> 
>  net/ceph/osd_client.c | 15 +++++++--------
>  1 file changed, 7 insertions(+), 8 deletions(-)
> 
> diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
> index c03d48bd3aff..050dc39065fb 100644
> --- a/net/ceph/osd_client.c
> +++ b/net/ceph/osd_client.c
> @@ -5850,7 +5850,7 @@ static inline void convert_extent_map(struct ceph_sparse_read *sr)
>  }
>  #endif
>  
> -#define MAX_EXTENTS 4096
> +#define MAX_EXTENTS (16*1024*1024)
>  
>  static int osd_sparse_read(struct ceph_connection *con,
>  			   struct ceph_msg_data_cursor *cursor,
> @@ -5883,14 +5883,13 @@ static int osd_sparse_read(struct ceph_connection *con,
>  	if (count > 0) {
>  		if (!sr->sr_extent || count > sr->sr_ext_len) {
>  			/*
> -			 * Apply a hard cap to the number of extents.
> -			 * If we have more, assume something is wrong.
> +			 * Warn if hits a hard cap to the number of extents.
> +			 * Too many extents could make the following
> +			 * kmalloc_array() fail.
>  			 */
> -			if (count > MAX_EXTENTS) {
> -				dout("%s: OSD returned 0x%x extents in a single reply!\n",
> -				     __func__, count);
> -				return -EREMOTEIO;
> -			}
> +			if (count > MAX_EXTENTS)
> +				pr_warn_ratelimited("%s: OSD returned 0x%x extents in a single reply!\n",
> +						    __func__, count);
>  
>  			/* no extent array provided, or too short */
>  			kfree(sr->sr_extent);

Looks reasonable.

Reviewed-by: Jeff Layton <jlayton@xxxxxxxxxx>
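
For a rough sense of the numbers behind the warning: assuming each extent is the 16-byte off/len pair of struct ceph_sparse_extent, a reply that actually hit the new MAX_EXTENTS would ask kmalloc_array() for roughly a 256 MiB array, versus 64 KiB at the old 4096 cap. A minimal standalone sketch of that arithmetic (userspace C, not kernel code; the struct layout here is my assumption mirroring ceph_sparse_extent):

#include <stdint.h>
#include <stdio.h>

/* Assumed to mirror struct ceph_sparse_extent (u64 off, u64 len). */
struct sparse_extent {
	uint64_t off;
	uint64_t len;
};

int main(void)
{
	uint64_t old_cap = 4096;                 /* old MAX_EXTENTS */
	uint64_t new_cap = 16ULL * 1024 * 1024;  /* new MAX_EXTENTS */

	/* Size the extent array would need at each cap. */
	printf("old cap: %llu extents -> %llu KiB\n",
	       (unsigned long long)old_cap,
	       (unsigned long long)(old_cap * sizeof(struct sparse_extent) / 1024));
	printf("new cap: %llu extents -> %llu MiB\n",
	       (unsigned long long)new_cap,
	       (unsigned long long)(new_cap * sizeof(struct sparse_extent) / (1024 * 1024)));
	return 0;
}

So with the hard error gone, the ratelimited warning mostly serves to explain a subsequent allocation failure at very large extent counts rather than prevent it.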