On 2019/12/20 21:37, Jason Gunthorpe wrote:
> On Fri, Dec 20, 2019 at 06:12:51PM +0800, Liuyixian (Eason) wrote:
>>
>> On 2019/12/18 22:00, Jason Gunthorpe wrote:
>>> On Mon, Dec 16, 2019 at 08:51:03PM +0800, Liuyixian (Eason) wrote:
>>>> Hi Jason,
>>>>
>>>> I want to make sure: are there any further comments on this patch set?
>>>
>>> I still dislike it a lot.
>>>
>> Hi Jason,
>>
>> Thanks for the reply :)
>> Let us recall the previous discussions and check whether anything is still pending.
>>
>> Comments on patch set V1:
>> 1. Why do you need a dedicated HIGHPRI work queue?
>> 2. As far as I could tell, the only thing that triggered the work to run
>>    was some variable which was only set in another work queue, 'hns_roce_irq_work_handle()'.
>> 3. Don't do allocations at all if you can't allow them to fail.
>>
>> Comments on patch set V2:
>> 1. It kind of looks like this can be called multiple times? It won't work
>>    right unless it is called exactly once.
>> 2. Why do you need more than one work in parallel for this? Once you
>>    start to move the HW to error that only has to happen once, surely?
>>    The flush operation should be performed once the QP state is moving to
>>    error, or whenever the producer index is updated for a QP in error state.
>> 3. The work function does something that looks like it only has to happen
>>    once per QP. Why do you need to keep re-queuing this thing every time the
>>    user posts a WR?
>>    The flush has to push the PI of the latest wqe every time the PI is
>>    updated; that is why re-queuing is needed.
>>
>> In conclusion, we have accepted all the comments on V1 and replied to all three
>> comments on V2, but got no further response. Have I missed any of your concerns?
>>
>> I would really appreciate it if you could help to point out the unsolved issue.
>
> You explained that you need to push the latest PI to the HW, so I still
> don't understand why it needs to be coded in such a confusing way.
>
> Keep a note of whether the PI is being pushed. If not, start pushing it. If
> yes, update the PI that will be pushed and do nothing.
>
> Jason

Good suggestion! I will fix it in v4. Thanks.
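
To make sure we are on the same page, here is a rough sketch of how I understand
the suggested pattern. The names below (demo_qp, flush_flag, flush_work, sq_pi,
demo_request_flush, demo_flush_handler) are illustrative placeholders only, not
the actual hns_roce structures, and the HW push itself is elided:

#include <linux/workqueue.h>
#include <linux/bitops.h>
#include <linux/types.h>
#include <linux/kernel.h>

#define QP_FLUSH_PENDING	0

struct demo_qp {
	unsigned long flush_flag;	/* bit QP_FLUSH_PENDING: a flush work is queued */
	struct work_struct flush_work;	/* initialized elsewhere with INIT_WORK() */
	u32 sq_pi;			/* latest SQ producer index to push */
};

/* Called after post_send() has advanced qp->sq_pi. */
static void demo_request_flush(struct demo_qp *qp, struct workqueue_struct *wq)
{
	/* Only queue a new work if none is already pending. */
	if (!test_and_set_bit(QP_FLUSH_PENDING, &qp->flush_flag))
		queue_work(wq, &qp->flush_work);
}

static void demo_flush_handler(struct work_struct *work)
{
	struct demo_qp *qp = container_of(work, struct demo_qp, flush_work);

	/*
	 * Clear the flag before reading the PI, so a PI update that races
	 * with this handler re-queues the work instead of being lost.
	 */
	clear_bit(QP_FLUSH_PENDING, &qp->flush_flag);

	/* ... push qp->sq_pi to the HW (modify QP / mailbox command) ... */
}

So post_send() only updates the PI and calls the request helper; at most one
flush work is in flight per QP, and it always pushes the newest PI it sees.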