On Fri, Oct 04, 2024 at 04:32:04PM +0100, Pavel Begunkov wrote:
> On 9/12/24 11:49, Ming Lei wrote:
> ...
...
> > @@ -473,6 +494,7 @@ enum {
> > 	REQ_F_BUFFERS_COMMIT_BIT,
> > 	REQ_F_SQE_GROUP_LEADER_BIT,
> > 	REQ_F_SQE_GROUP_DEP_BIT,
> > +	REQ_F_GROUP_KBUF_BIT,
> >
> > 	/* not a real bit, just to check we're not overflowing the space */
> > 	__REQ_F_LAST_BIT,
> > @@ -557,6 +579,8 @@ enum {
> > 	REQ_F_SQE_GROUP_LEADER	= IO_REQ_FLAG(REQ_F_SQE_GROUP_LEADER_BIT),
> > 	/* sqe group with members depending on leader */
> > 	REQ_F_SQE_GROUP_DEP	= IO_REQ_FLAG(REQ_F_SQE_GROUP_DEP_BIT),
> > +	/* group lead provides kbuf for members, set for both lead and member */
> > +	REQ_F_GROUP_KBUF	= IO_REQ_FLAG(REQ_F_GROUP_KBUF_BIT),
>
> We have a huge flag problem here. It's a 4th group flag, which gives
> me the idea that they're overabused. We're adding state machines based
> on them: "set group, clear group, but if last set it again; and clear
> group lead if refs are of a particular value". And it's not really
> clear what these two flags are here for or what they do.
>
> From what I see you need just one flag here to mark requests
> that provide a buffer, ala REQ_F_PROVIDING_KBUF. On the import
> side:
>
> if ((req->flags & GROUP) && (req->lead->flags & REQ_F_PROVIDING_KBUF))
> 	...
>
> And when you kill the request:
>
> if (req->flags & REQ_F_PROVIDING_KBUF)
> 	io_group_kbuf_drop();

REQ_F_PROVIDING_KBUF may be killed too, and the check helper can become:

bool io_use_group_provided_buf(req)
{
	return (req->flags & GROUP) && req->lead->grp_buf;
}

Thanks,
Ming