Re: [PATCH v13 4/4] bus: mhi: Add userspace client interface driver

On Tue, 1 Dec 2020 at 18:51, Jeffrey Hugo <jhugo@xxxxxxxxxxxxxx> wrote:
>
> On 12/1/2020 10:52 AM, Loic Poulain wrote:
> > On Tue, 1 Dec 2020 at 18:37, Jeffrey Hugo <jhugo@xxxxxxxxxxxxxx> wrote:
> >>
> >> On 12/1/2020 10:36 AM, Loic Poulain wrote:
> >>> On Tue, 1 Dec 2020 at 02:16, Hemant Kumar <hemantk@xxxxxxxxxxxxxx> wrote:
> >>>>
> >>>> Hi Loic,
> >>>>
> >>>> On 11/30/20 10:22 AM, Loic Poulain wrote:
> >>>>> On Sat, 28 Nov 2020 at 04:26, Hemant Kumar <hemantk@xxxxxxxxxxxxxx> wrote:
> >>>>>>
> >>>>>> This MHI client driver allows userspace clients to transfer
> >>>>>> raw data between an MHI device and the host using standard file
> >>>>>> operations. The driver instantiates a UCI device object, which is
> >>>>>> associated with a device file node. The UCI device object
> >>>>>> instantiates a UCI channel object when the device file node is
> >>>>>> opened. The UCI channel object is used to manage MHI channels by
> >>>>>> calling MHI core APIs for read and write operations. MHI channels
> >>>>>> are started as part of device open(). MHI channels remain in the
> >>>>>> started state until the last release() is called on the UCI device
> >>>>>> file node. Device file node is created with format
> >>>>>
> >>>>> [...]
> >>>>>
> >>>>>> +struct uci_chan {
> >>>>>> +       struct uci_dev *udev;
> >>>>>> +       wait_queue_head_t ul_wq;
> >>>>>> +
> >>>>>> +       /* ul channel lock to synchronize multiple writes */
> >>>>>> +       struct mutex write_lock;
> >>>>>> +
> >>>>>> +       wait_queue_head_t dl_wq;
> >>>>>> +
> >>>>>> +       /* dl channel lock to synchronize multiple reads */
> >>>>>> +       struct mutex read_lock;
> >>>>>> +
> >>>>>> +       /*
> >>>>>> +        * protects pending list in bh context, channel release, read and
> >>>>>> +        * poll
> >>>>>> +        */
> >>>>>> +       spinlock_t dl_pending_lock;
> >>>>>> +
> >>>>>> +       struct list_head dl_pending;
> >>>>>> +       struct uci_buf *cur_buf;
> >>>>>> +       size_t dl_size;
> >>>>>> +       struct kref ref_count;
> >>>>>> +};
> >>>>>
> >>>>> [...]
> >>>>>
> >>>>>> + * struct uci_dev - MHI UCI device
> >>>>>> + * @minor: UCI device node minor number
> >>>>>> + * @mhi_dev: associated mhi device object
> >>>>>> + * @uchan: UCI uplink and downlink channel object
> >>>>>> + * @mtu: max TRE buffer length
> >>>>>> + * @enabled: Flag to track the state of the UCI device
> >>>>>> + * @lock: mutex lock to manage uchan object
> >>>>>> + * @ref_count: uci_dev reference count
> >>>>>> + */
> >>>>>> +struct uci_dev {
> >>>>>> +       unsigned int minor;
> >>>>>> +       struct mhi_device *mhi_dev;
> >>>>>> +       struct uci_chan *uchan;
> >>>>>
> >>>>> Why a pointer to uci_chan rather than embedding the structure here?
> >>>>> AFAIU uci_chan describes the channels and is just a subpart of
> >>>>> uci_dev. That would reduce the number of dynamic allocations you
> >>>>> manage and the extra kref. Do you even need a separate structure for
> >>>>> this?
> >>>>
> >>>> This goes back to one of my earlier patch versions, where I tried to
> >>>> address a concern from Greg. Since we need to refcount the channel as
> >>>> well as the UCI device, I decoupled the two objects and used two
> >>>> reference counts, one for each object.
> >>>
> >>> What Greg complained about was the two krefs in the same structure and
> >>> the fact that you were using a kref as an open() counter. But splitting
> >>> your struct in two just to keep the two krefs does not make the code
> >>> much better (or simpler). I'm still a bit puzzled by the driver's
> >>> complexity; it's supposed to be just a passthrough interface to MHI
> >>> after all.
> >>>
> >>> I would suggest several changes, which IMHO would simplify reviewing:
> >>> - Use only one structure representing the 'uci' context (uci_dev)
> >>> - Keep the read path simple (mhi_uci_read): do not use an intermediate
> >>> cur_buf pointer; only dequeue the buffer when it is fully consumed.
> >>> - As I commented before, take care of the dl_pending list access
> >>> concurrency, even in wait_event.
> >>> - You don't need to count the number of open() calls; AFAIK,
> >>> mhi_prepare_for_transfer() simply fails if the channels are already
> >>> started...
> >>
> >> Unless I missed something, you seem to have ignored the root issue that
> >> Hemant needs to solve, which is when to call
> >> mhi_unprepare_from_transfer().  You can't just call that when close() is
> >> called, because there might be multiple users and each one is going to
> >> trigger a close(); you need to know how many close() instances to
> >> expect, and only call mhi_unprepare_from_transfer() for the last one.
> >
> > That's one part of his problem, yes, but if you unconditionally call
> > mhi_prepare_for_transfer() in open(), it should fail for subsequent
> > users, so only one user will successfully open the device.
>
> I'm pretty sure that falls under "trying to prevent users from opening a
> device", which Greg gave a pretty strong NACK to.  So that's not a solution.

That deserves clarification, since other drivers such as virtio_console
clearly do prevent that (guest_connected).
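
For context, the virtio_console behaviour I'm referring to is a simple
"busy" flag checked in open(). A paraphrased, untested sketch (the struct
and function names below are illustrative, not the actual virtio_console
code):

#include <linux/errno.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Illustrative port context, reduced to the single-open guard. */
struct demo_port {
	spinlock_t lock;
	bool guest_connected;	/* set while userspace holds the port open */
};

static int demo_open(struct demo_port *port)
{
	spin_lock_irq(&port->lock);
	if (port->guest_connected) {
		/* Someone already has the port open: refuse a second user */
		spin_unlock_irq(&port->lock);
		return -EBUSY;
	}
	port->guest_connected = true;
	spin_unlock_irq(&port->lock);

	return 0;
}

static void demo_release(struct demo_port *port)
{
	spin_lock_irq(&port->lock);
	port->guest_connected = false;
	spin_unlock_irq(&port->lock);
}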

>
> So, the complete problem is:
>
> N users need to be able to use the device (and by proxy, the channel)
> concurrently, but prepare needs to be called when the first user comes
> into the picture, and unprepare needs to be called when the last user
> exits the picture.
>
> Hemant has supported this use case in every rev of this series I've
> looked at, but apparently every solution he has proposed to handle it
> has caused confusion.

Understood, but that can be done with a simple counter in that case.
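
Something like this untested sketch, using the MHI core API; the uci_dev
fields below are trimmed down for illustration and don't match Hemant's
patch exactly:

#include <linux/mhi.h>
#include <linux/mutex.h>

/* Illustrative context: only the fields needed for open counting. */
struct demo_uci_dev {
	struct mhi_device *mhi_dev;
	struct mutex lock;		/* serializes open() and release() */
	unsigned int open_count;	/* number of current openers */
};

static int demo_uci_open(struct demo_uci_dev *udev)
{
	int ret = 0;

	mutex_lock(&udev->lock);
	if (udev->open_count == 0) {
		/* First user: start the MHI channels */
		ret = mhi_prepare_for_transfer(udev->mhi_dev);
		if (ret)
			goto out;
	}
	udev->open_count++;
out:
	mutex_unlock(&udev->lock);
	return ret;
}

static void demo_uci_release(struct demo_uci_dev *udev)
{
	mutex_lock(&udev->lock);
	if (--udev->open_count == 0)
		/* Last user gone: stop the MHI channels */
		mhi_unprepare_from_transfer(udev->mhi_dev);
	mutex_unlock(&udev->lock);
}

The uci_dev lifetime (e.g. device removal while a file descriptor is
still open) can still be handled with a single kref on uci_dev; only the
channel start/stop needs the counter.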

Regards,
Loic


