Re: [PATCH v2] PCI: Add support for LTR

On Thu, Sep 17, 2020 at 10:06:43PM +0530, Puranjay Mohan wrote:
> On Wed, Sep 16, 2020 at 3:31 AM Bjorn Helgaas <helgaas@xxxxxxxxxx> wrote:
> > On Tue, Aug 25, 2020 at 11:31:31PM +0530, Puranjay Mohan wrote:

> > > +     dev = pci_upstream_bridge(dev);
> > > +     while (dev) {
> > > +             max_snoop_sum += dev->max_snoop_latency;
> > > +             max_nosnoop_sum += dev->max_nosnoop_latency;
> >
> > dev->max_snoop_latency and dev->max_nosnoop_latency are not simple
> > scalars, are they?  Aren't they 3 bits of scale and 10 bits of value?
> > I don't think adding these is as easy as "+=" except in the simple
> > case when the scale is "000", i.e., "use the 10 bits of value as-is".
> >
> > I think we have to decode scale * latency for each device in the path,
> > add all those up, then re-encode using the appropriate scale for the
> > config write below.
>
> I was thinking about it.  If we use two more variables and store the
> scale and value separately, then it will become easy.  We can add the
> values directly but, as you said, we can't add the scales.  I will
> think about this more.

Adding more things to struct pci_dev consumes that space permanently
even though it's only needed during enumeration.  This LTR init is
only done once per device, so there's no need to speed it up by adding
more variables.

You'll just have to see how it looks when you code it up.
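
Something along these lines is what I have in mind (completely
untested, and the helper names are invented; the PCI_LTR_* masks are
the ones already in include/uapi/linux/pci_regs.h):

/*
 * Decode a 16-bit LTR latency register (value in bits 9:0, scale in
 * bits 12:10) into nanoseconds.  Each scale step is a factor of 32,
 * i.e., latency = value * 2^(5 * scale); scales 110b and 111b are
 * reserved.
 */
static u64 ltr_decode_ns(u16 reg)
{
	u64 value = reg & PCI_LTR_VALUE_MASK;
	int scale = (reg & PCI_LTR_SCALE_MASK) >> PCI_LTR_SCALE_SHIFT;

	return value << (5 * scale);
}

/*
 * Re-encode nanoseconds, picking the smallest scale whose 10-bit
 * value field can hold the result.  Round up so we never encode a
 * smaller latency than the actual sum, and saturate at the largest
 * representable value (0x3ff at scale 101b).
 */
static u16 ltr_encode_ns(u64 ns)
{
	int scale = 0;

	while (ns > PCI_LTR_VALUE_MASK && scale < 5) {
		ns = (ns + 31) >> 5;	/* divide by 32, rounding up */
		scale++;
	}
	if (ns > PCI_LTR_VALUE_MASK)
		ns = PCI_LTR_VALUE_MASK;

	return ns | (scale << PCI_LTR_SCALE_SHIFT);
}

Then the loop above becomes a sum of ltr_decode_ns() results, and a
single ltr_encode_ns() of the total before the config write.  E.g.,
value 0x201 with scale 010b decodes to 513 * 1024 = 525312 ns.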


