Re: [V2] mmc: sdhci-pci-gli: Improve Random 4K Read Performance of GL9763E

On Mon, May 30, 2022 at 5:03 PM Christian Löhle <CLoehle@xxxxxxxxxxxxxx> wrote:
>
> >This patch is based on patch [1] and adopts Adrian's comments.
> >
> >Due to flaws in the hardware design, GL9763E takes a long time to exit
> >the L1 state. I/O performance suffers severely if the controller often
> >enters and exits the L1 state.
> >
> >Unfortunately, entering and exiting the L1 state is a signal handshake
> >in the physical layer; software knows nothing about it. The only way to
> >stop entering the L1 state is to disable hardware LPM negotiation on
> >GL9763E.
> >
> >To improve read performance while taking battery life into account, we
> >reject L1 negotiation while executing the MMC_READ_MULTIPLE_BLOCK
> >command and enable L1 negotiation again when receiving a
> >non-MMC_READ_MULTIPLE_BLOCK command.
> >
>
> Could you describe the impact for people unfamiliar with the GL9763E?
> Does this essentially disable low-power mode if the controller serviced a CMD18 last?
> (which will be most of the (idle) time for reasonable scenarios, right?)
> Or what exactly is the LPM negotiation doing?
> Hyperstone GmbH | Reichenaustr. 39a  | 78467 Konstanz
> Managing Director: Dr. Jan Peter Berns.
> Commercial register of local courts: Freiburg HRB381782
>

The I/O request flow can be simplified as below:
        request received --> mmc_command --> wait command complete (data transfer phase)
        --> request complete --> wait-for-next-request

If the time interval between two stages exceeds the L1 entry delay time
(21 us for GL9763E), the PCIe LINK layer will enter the L1 state, and the
kernel/driver cannot know when that happened. When the PCIe host is about
to send a message/command, its LINK exits the L1 state first. GL9763E
also exits the L1 state at the same time, but it takes a little while to
get back to the L0 state. If we let GL9763E enter and exit the L1 state
freely, only 20% of the read performance remains. Hence, we decided to
disable LPM negotiation during the READ_MULTIPLE_BLOCK command.

Considering that the PCIe LINK will also enter the L1 state during the
wait-for-next-request stage, LPM negotiation also needs to be disabled in
that stage. That is why we enable/disable LPM negotiation at the point
where a request is received. For example:
        CMD18 --> disable LPM negotiation --> CMD18 done
        --> CMD18 --> keep LPM negotiation disabled --> CMD18 done
        --> CMD17 --> enable LPM negotiation --> CMD17 done
        --> CMD17 --> keep LPM negotiation enabled --> CMD17 done
        --> CMD18 --> disable LPM negotiation --> CMD18 done

Hope the explanation above can answer your question.

regards,
Jason Lai



