On Mon, 2020-06-29 at 11:06 +0000, Avri Altman wrote:
> > Hi Avri
> >
> > On Mon, 2020-06-29 at 05:24 +0000, Avri Altman wrote:
> > > Hi Bean,
> > >
> > > > Hi Daejun
> > > >
> > > > It seems you intentionally chose not to respond to my
> > > > suggestion, so let me explain my reasoning.
> > > >
> > > > Before submitting your next version of the patch, please check
> > > > the submission logic of your L2P mapping HPB requests. I have
> > > > done performance comparison testing with 4KB I/O, and there is
> > > > about a 13% performance drop. The hit count is also lower. I
> > > > don't know whether this is related to your current workqueue
> > > > scheduling, since you didn't add a timer for each HPB request.
> > >
> > > In device control mode, the various decisions, and specifically
> > > those that are causing repetitive evictions, are made by the
> > > device. Is this the issue that you are referring to?
> >
> > In this device mode, if the HPB mapping table of an active region
> > becomes dirty on the UFS device side, there are repetitive
> > inactivation responses, but that is not the cause of the condition
> > I mentioned here.
> >
> > > As for the driver, do you see any issue that is causing
> > > unnecessary latency?
> >
> > Daejun's patch now uses a workqueue: whenever there is a new RSP
> > indicating a subregion should be activated, the driver queues
> > "work" on this workqueue. This is deferred work, so we don't know
> > when it will be scheduled or finished. We need to optimize it.
>
> But those "to-do" lists are checked on every completion interrupt
> and on every resume.
> Do you see any scenario in which the "to-be-activated" or
> "to-be-inactivated" work is getting starved?

Let me run more test cases; I will get back to you if there are any
new updates.

Thanks,
Bean