On Fri, 2023-12-15 at 08:20 +0800, xiubli@xxxxxxxxxx wrote:
> From: Xiubo Li <xiubli@xxxxxxxxxx>
>
> Once this happens, it means there is a bug.
>
> URL: https://tracker.ceph.com/issues/63586
> Signed-off-by: Xiubo Li <xiubli@xxxxxxxxxx>
> ---
>  net/ceph/osd_client.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
> index 5753036d1957..848ef19055a0 100644
> --- a/net/ceph/osd_client.c
> +++ b/net/ceph/osd_client.c
> @@ -5912,10 +5912,12 @@ static int osd_sparse_read(struct ceph_connection *con,
>  		fallthrough;
>  	case CEPH_SPARSE_READ_DATA:
>  		if (sr->sr_index >= count) {
> -			if (sr->sr_datalen && count)
> +			if (sr->sr_datalen) {
>  				pr_warn_ratelimited("sr_datalen %u sr_index %d count %u\n",
>  						    sr->sr_datalen, sr->sr_index,
>  						    count);
> +				return -EREMOTEIO;
> +			}
>
>  			sr->sr_state = CEPH_SPARSE_READ_HDR;
>  			goto next_op;

Do you really need to fail the read in this case? Would it not be
better to just advance past the extra junk? Or is this problem more
indicative of a malformed frame?

It'd be nice to have a specific explanation of the problem this is
fixing and how it was triggered. Is this due to a misbehaving server?
Bad hardware?
--
Jeff Layton <jlayton@xxxxxxxxxx>