Re: [RFC] [PATCH v2 00/11] Optimize spi_sync path

On Thu, 16 Jun 2022 15:55:29 +0100
Mark Brown <broonie@xxxxxxxxxx> wrote:

> On Thu, Jun 16, 2022 at 04:13:23PM +0200, David Jander wrote:
> > Mark Brown <broonie@xxxxxxxxxx> wrote:  
> > > On Wed, Jun 15, 2022 at 02:46:23PM +0200, David Jander wrote:  
> 
> > > I've given this a first pass and it looks sensible so far - I'll need to
> > > give it a more thorough look but I'd expect it should be fine.  The
> > > numbers certainly look good.  
> 
> > The current patch set probably needs to get partly squashed, since there are a
> > few patches that undo changes from a previous patch. I left them like this in
> > order to hopefully make the step by step mutation more clear for review.  
> 
> Yes, there's a bit of stuff.  I think it's based off your previous
> proposed patch too?

Yes, in large part. I removed the API change, and all further optimizations and
improvements are done step by step on top of it, such as your suggestion to
introduce the completion in __pump_messages and the further optimization after
that. In hindsight I probably should have tried to split up patch 2 a bit more.

> > I had some doubts about patch 11, since it introduces 2 new members to struct
> > spi_controller. I was trying to keep the pollution down, but I couldn't find a
> > better way to do this optimization. Any suggestions? Maybe a better name/place
> > for these flags?  
> 
> Not really - I'm not too concerned about individual flags since we don't
> have so many SPI controllers in a system, it's not like it's a per task
> overhead or similar.

Ok, then we leave it as is. I was looking for a place that grouped "private"
or "internal" struct members, but couldn't really find one. I guess it
wouldn't make sense for SPI drivers to look at these anyway.

> > Ideally this would get as much different hardware testing as possible before
> > going further upstream. Do you have access to some platforms suitable for
> > stressing SPI with multiple clients simultaneously? Known "problematic"
> > controllers maybe?  
> 
> Well, the fastest way to get it into a wide range of CI is for me to
> apply it so people who test -next will start covering it...  I was going
> to kick it into my test branch KernelCI once I've got it reviewed
> properly which will get at least some boot testing on a bunch of
> platforms.

Ah, great. I will see if I can get it tested on some additional platforms on
our side.

> For testing the main thing that'd be nice for testing would probably be
> coverage of controllers that don't block in transfer_one_message() and
> those that complete in interrupt context while blocking in there.

Ah, yes, that would be ideal. AFAIK spi-pl022.c and spi-axi-spi-engine.c do
this.
Also, if someone could run some independent performance comparisons before and
after this series plus the per-cpu stats patch, that would be very
interesting. I don't like people having to take my word for the gains ;-)

Best regards,

-- 
David Jander



