Re: [PATCH 1/1] Revert "scsi: ufs: core: Only suspend clock scaling if scaling down"

On Tue, 2024-03-05 at 12:59 +0530, Ram Prakash Gupta wrote:
> 
> On 2/29/2024 1:21 PM, Peter Wang (王信友) wrote:
> > On Wed, 2024-02-28 at 11:04 +0530, Ram Prakash Gupta wrote:
> >>   
> >> External email : Please do not click links or open attachments
> until
> >> you have verified the sender or the content.
> >> This reverts commit 1d969731b87f122108c50a64acfdbaa63486296e.
> >> Approx 28% random perf IO degradation is observed when clk scaling
> >> is suspended only when clks are scaled down. The concern behind the
> >> original fix was power consumption, which is already taken care of
> >> by clk gating putting the link into hibern8 state.
> >>
> >> Signed-off-by: Ram Prakash Gupta <quic_rampraka@xxxxxxxxxxx>
> >> ---
> >>   drivers/ufs/core/ufshcd.c | 2 +-
> >>   1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
> >> index c416826762e9..f6be18db031c 100644
> >> --- a/drivers/ufs/core/ufshcd.c
> >> +++ b/drivers/ufs/core/ufshcd.c
> >> @@ -1586,7 +1586,7 @@ static int ufshcd_devfreq_target(struct device *dev,
> >>   ktime_to_us(ktime_sub(ktime_get(), start)), ret);
> >>   
> >>   out:
> >> -	if (sched_clk_scaling_suspend_work && !scale_up)
> >> +	if (sched_clk_scaling_suspend_work)
> >> 		queue_work(hba->clk_scaling.workq,
> >> 			   &hba->clk_scaling.suspend_work);
> >>   
> >> -- 
> >> 2.17.1
> > 
> > Hi Ram,
> > 
> > It is logically wrong to keep the high gear when there is no
> > read/write traffic. Even if the high gear turns off the clock and
> > enters hibernate, other hardware still consumes power to keep IO in
> > the high gear, e.g. CPU latency and CPU power.
> > 
> By CPU latency and power, do you mean the QoS/bus vote from the ufs
> driver? If yes, then once clk gating kicks in, ufs is already in
> hibernate state and there is no point keeping the votes (pm qos and
> bus votes) from the ufs driver. And as part of clk gating, qos and
> other votes on the SoC can be removed, so there would be no power
> concern.
> 
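
(For reference, the gating path being discussed looks roughly like this
in drivers/ufs/core/ufshcd.c -- a simplified sketch, with locking,
state checks and error handling omitted:)

	static void ufshcd_gate_work(struct work_struct *work)
	{
		struct ufs_hba *hba = container_of(work, struct ufs_hba,
						   clk_gating.gate_work.work);

		/* The link is put into hibern8 first... */
		if (ufshcd_can_hibern8_during_gating(hba))
			ufshcd_uic_hibern8_enter(hba);

		/*
		 * ...and only then are the clocks turned off, which is
		 * also where a vendor could drop its pm qos / bus votes.
		 */
		ufshcd_setup_clocks(hba, false);
	}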

Not only the pm qos vote: a vcore raise is also needed for the mediatek
hw design, and this core voltage is used by our entire SoC. That means
the power impact is very large for mediatek.


> Now with your implementation, since scaling is not suspended, the next
> window of devfreq polling will see the lowest possible load when the
> active request count is zero, because of which devfreq will scale the
> clock down. So the next request would always be completed at a low
> clock frequency, which is not desirable from a performance point of
> view when power consumption is already taken care of by clk gating.

If a polling period has no IO traffic, it should scale down, right?
If IO is busy, then scale up.
If IO is not busy, then scale down.
That is the basic logic.
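
That is essentially what the simple_ondemand governor implements. A
rough sketch of its decision, assuming the governor's default
thresholds (upthreshold 90, downdifferential 5; ufshcd can override
them through hba->vps->ondemand_data) -- see
drivers/devfreq/governor_simpleondemand.c for the real code:

	static unsigned long dfso_pick_freq(unsigned long busy,
					    unsigned long total,
					    unsigned long cur,
					    unsigned long max)
	{
		if (total == 0)
			return max;	/* no stats yet, assume max */

		/* Load above 90%: jump to the top frequency. */
		if (busy * 100 > total * 90)
			return max;

		/* Load inside the hysteresis band (85%..90%): hold. */
		if (busy * 100 > total * (90 - 5))
			return cur;

		/*
		 * Load below 85%: pick a frequency proportional to the
		 * load.  With no IO in the window busy == 0, so this
		 * returns 0 and devfreq clamps to the lowest OPP -- the
		 * scale-down both of you are describing.
		 */
		return busy * cur / total * 100 / (90 - 5 / 2);
	}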
> 
> > Besides, clock scaling is designed for power, not for performance.
> > If you want to keep high performance, you can just turn off clock
> > scaling and stay in the highest gear.
> > 
> I think it is about striking the right balance. And if there really is
> a big power concern on mediatek boards, clock gating can be made a bit
> more aggressive there by removing all the votes (qos, bus) from the
> ufs driver on the SoC as part of clock gating. Also, is clock scaling
> disabled on mediatek platforms?
> 

Mediatek does use clock gating. But there is still a window between
entering auto-hibernate and clock gating, and the power wasted in that
window is not reasonable.
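
The window looks like this (an illustration using ufshcd's field names;
the timings are examples, not mediatek's actual settings):

	/*
	 * t0            : last request completes
	 * t0 + ahit     : host HW enters auto-hibern8 (hba->ahit timer)
	 * t0 + delay_ms : gate_work runs (hba->clk_gating.delay_ms)
	 *
	 * Between t0 + ahit and t0 + delay_ms the link is already idle,
	 * but clocks, vcore and the votes are still held.
	 */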


> > Finally, mediatek doesn't suffer a performance drop with this patch.
> > Could you list the test procedure and the performance drop data in
> > more detail? I am curious in what scenario your random IO drops 28%.
> > And I think your dvfs parameters could be the reason for the drop.
> > 
> There is no specific environment or procedure used; I am just using
> the antutu benchmark to get the numbers, and the random IO numbers
> are degraded by approx 28%.
> 

Have you tried another dvfs setting? For example, enlarging the polling
period to make scale-down harder?

Anyway, I think it is not correct to gain performance through a kernel
bug (scaling up when no IO is ongoing).
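
By "another dvfs setting" I mean something like this from a vendor init
hook (a hypothetical example: the field names come from ufshcd's
struct ufs_hba_variant_params, but the values are made up):

	static int example_vops_init(struct ufs_hba *hba)
	{
		/*
		 * Poll less often, so a single idle window does not
		 * immediately trigger a scale-down (ufshcd's default
		 * is 100 ms)...
		 */
		hba->vps->devfreq_profile.polling_ms = 200;

		/* ...and adjust the governor thresholds. */
		hba->vps->ondemand_data.upthreshold = 70;
		hba->vps->ondemand_data.downdifferential = 5;

		return 0;
	}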


Thanks.
Peter




