Hello Chris,

On 6/8/22 03:16, Chris Lew wrote:
> This series proposes an implementation for the rpmsg framework to do
> deferred cleanup of buffers provided in the rx callback. The current
> implementation assumes that the client is done with the buffer after
> returning from the rx callback.
>
> In some cases where the data size is large, the client may want to
> avoid copying the data in the rx callback for later processing. This
> series proposes two new facilities for signaling that they want to
> hold on to a buffer after the rx callback. They are:
>  - New API rpmsg_rx_done() to tell the rpmsg framework the client is
>    done with the buffer
>  - New return codes for the rx callback to signal that the client will
>    hold onto a buffer and later call rpmsg_rx_done()
>
> This series implements the qcom_glink_native backend for these new
> facilities.

The API you propose seems quite smart to me and adaptable to the rpmsg
virtio backend.

My main concern is the release of held buffers when the endpoint is
destroyed. Should the buffer release be handled by each service, or by
the core?

I wonder whether the buffer list could be managed by the core by adding
the list to the rpmsg_endpoint structure. On destroy, the core could
then call rx_done for each buffer remaining in the list...

I'll let Bjorn and Mathieu advise on this.

Thanks,
Arnaud

>
> Chris Lew (4):
>   rpmsg: core: Add rx done hooks
>   rpmsg: char: Add support to use rpmsg_rx_done
>   rpmsg: glink: Try to send rx done in irq
>   rpmsg: glink: Add support for rpmsg_rx_done
>
>  drivers/rpmsg/qcom_glink_native.c | 112 ++++++++++++++++++++++++++++++--------
>  drivers/rpmsg/rpmsg_char.c        |  50 ++++++++++++++++-
>  drivers/rpmsg/rpmsg_core.c        |  20 +++++++
>  drivers/rpmsg/rpmsg_internal.h    |   1 +
>  include/linux/rpmsg.h             |  24 ++++++++
>  5 files changed, 183 insertions(+), 24 deletions(-)
>