Re: [PATCH v4 1/3] bus: mhi: core: Introduce internal register poll helper function

On Thu, 18 Mar 2021 at 17:13, Jeffrey Hugo <jhugo@xxxxxxxxxxxxxx> wrote:
>
> On 3/17/2021 3:26 PM, Bhaumik Bhatt wrote:
> > On 2021-03-11 11:59 AM, Jeffrey Hugo wrote:
> >> On 3/11/2021 1:00 AM, Loic Poulain wrote:
> >>> Hi Bhaumik,
> >>>
> >>> On Thu, 11 Mar 2021 at 00:31, Bhaumik Bhatt <bbhatt@xxxxxxxxxxxxxx>
> >>> wrote:
> >>>>
> >>>> Introduce helper function to allow MHI core driver to poll for
> >>>> a value in a register field. This helps reach a common path to
> >>>> read and poll register values along with a retry time interval.
> >>>>
> >>>> Signed-off-by: Bhaumik Bhatt <bbhatt@xxxxxxxxxxxxxx>
> >>>> ---
> >>>>   drivers/bus/mhi/core/internal.h |  3 +++
> >>>>   drivers/bus/mhi/core/main.c     | 23 +++++++++++++++++++++++
> >>>>   2 files changed, 26 insertions(+)
> >>>>
> >>>> diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
> >>>> index 6f80ec3..005286b 100644
> >>>> --- a/drivers/bus/mhi/core/internal.h
> >>>> +++ b/drivers/bus/mhi/core/internal.h
> >>>> @@ -643,6 +643,9 @@ int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
> >>>>   int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
> >>>>                                      void __iomem *base, u32 offset, u32 mask,
> >>>>                                      u32 shift, u32 *out);
> >>>> +int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
> >>>> +                                   void __iomem *base, u32 offset, u32 mask,
> >>>> +                                   u32 shift, u32 val, u32 delayus);
> >>>>   void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
> >>>>                     u32 offset, u32 val);
> >>>>   void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
> >>>> diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
> >>>> index 4e0131b..7c7f41a 100644
> >>>> --- a/drivers/bus/mhi/core/main.c
> >>>> +++ b/drivers/bus/mhi/core/main.c
> >>>> @@ -4,6 +4,7 @@
> >>>>    *
> >>>>    */
> >>>>
> >>>> +#include <linux/delay.h>
> >>>>   #include <linux/device.h>
> >>>>   #include <linux/dma-direction.h>
> >>>>   #include <linux/dma-mapping.h>
> >>>> @@ -37,6 +38,28 @@ int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
> >>>>          return 0;
> >>>>   }
> >>>>
> >>>> +int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
> >>>> +                                   void __iomem *base, u32 offset,
> >>>> +                                   u32 mask, u32 shift, u32 val, u32 delayus)
> >>>> +{
> >>>> +       int ret;
> >>>> +       u32 out, retry = (mhi_cntrl->timeout_ms * 1000) / delayus;
> >>>> +
> >>>> +       while (retry--) {
> >>>> +               ret = mhi_read_reg_field(mhi_cntrl, base, offset, mask, shift,
> >>>> +                                        &out);
> >>>> +               if (ret)
> >>>> +                       return ret;
> >>>> +
> >>>> +               if (out == val)
> >>>> +                       return 0;
> >>>> +
> >>>> +               udelay(delayus);
> >>>
> >>> Have you read my previous comment?
> >>> Do you really want to risk hogging the CPU for several seconds? We
> >>> know that some devices take several seconds to start/boot.
> >>> Why not use an msleep() variant here?
> >>
> >> usleep_range() if there is a desire to stay in us units?
> >>
> >> Given that the use of this function is for 25ms in one case, I wonder
> >> if this warning is applicable:
> >> https://elixir.bootlin.com/linux/latest/source/include/linux/delay.h#L28
> >>
> >> Counterpoint: 1ms latency over PCIe is not unusual.  I know we've
> >> removed the PCIe dependencies from MHI, but PCIe is the real use case
> >> at this time.  Seems like this function could behave a bit oddly if
> >> the parameter to udelay is something like "100", but the
> >> mhi_read_reg_field() call takes significantly longer than that.  Feels
> >> like in some scenarios we could actually exceed the timeout by a
> >> non-trivial margin.
> >>
> >> I guess I'm going back and forth on whether us-scale timing is a
> >> benefit in any way.
> > Thanks for all the inputs. I think a good idea here would be to use the
> > fsleep() API, as we need to allow any timeout the caller specifies. Also,
> > the plan is to drop patch #3 in this series, since it would require a
> > busy-wait due to the code being in the panic path.
> >
> > I don't wish to add another parameter here to select a busy-wait, but
> > that would be an option for picking sleep or delay depending on the
> > caller's path.
> >
> > Please respond if there are any concerns.
>
> fsleep() would be some improvement, but I think there is still the issue
> Loic points out: if delayus is small but timeout_ms is large (say 50us
> and 25s), this function will end up burning a lot of CPU cycles.
> However, that is likely an edge case, and if your poll cycle is that
> small, I think it should be assumed that the event is expected to happen
> quickly, so the full timeout should not be hit.

Well, my point is that during the initial power_up, with a device
cold-booting, it can take several seconds for it to reach the ready
state (not a corner case). That's why timeout_ms can be as large as 20
seconds for mhi_pci_modem. If polling is based on busy-waiting, the
while loop will not let the CPU run anything else for several seconds.
I'm not sure what the expected meaning of this timeout_ms is in the
first place... maybe I'm just using it badly.

Moreover, do we really need microsecond latency for detecting the ready
transition? This is not a critical path, right?
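
For illustration, here is an untested sketch of what I have in mind:
the same logic as the v4 patch, but with udelay() swapped for fsleep(),
which internally picks udelay()/usleep_range()/msleep() based on the
requested duration. The final return value is my assumption, since the
tail of the hunk is not quoted above:

int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
				    void __iomem *base, u32 offset,
				    u32 mask, u32 shift, u32 val, u32 delayus)
{
	int ret;
	u32 out, retry = (mhi_cntrl->timeout_ms * 1000) / delayus;

	while (retry--) {
		ret = mhi_read_reg_field(mhi_cntrl, base, offset, mask, shift,
					 &out);
		if (ret)
			return ret;

		if (out == val)
			return 0;

		/* Sleep rather than spin, so the CPU can run other work
		 * while we wait for the device.
		 */
		fsleep(delayus);
	}

	return -ETIMEDOUT;	/* assumption: error once the timeout expires */
}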

Regards,
Loic


