Re: Computation of return value being discarded in get_cpu_power() in drivers/platform/x86/intel_ips.c

+ Jesse

Quoting Colin Ian King (2021-06-09 14:50:07)
> Hi,
> 
> I was reviewing some old unassigned variable warnings from static
> analysis by Coverity and found an issue introduced with the following
> commit:
> 
> commit aa7ffc01d254c91a36bf854d57a14049c6134c72
> Author: Jesse Barnes <jbarnes@xxxxxxxxxxxxxxxx>
> Date:   Fri May 14 15:41:14 2010 -0700
> 
>     x86 platform driver: intelligent power sharing driver
> 
> The analysis is as follows:
> 
> drivers/platform/x86/intel_ips.c
> 
>  871 static u32 get_cpu_power(struct ips_driver *ips, u32 *last, int period)
>  872 {
>  873        u32 val;
>  874        u32 ret;
>  875
>  876        /*
>  877         * CEC is in joules/65535.  Take difference over time to
>  878         * get watts.
>  879         */
>  880        val = thm_readl(THM_CEC);
>  881
>  882        /* period is in ms and we want mW */
>  883        ret = (((val - *last) * 1000) / period);
> 
> Unused value (UNUSED_VALUE)
> assigned_value:  Assigning value from ret * 1000U / 65535U to ret here,
> but that stored value is not used.
> 
>  884        ret = (ret * 1000) / 65535;
>  885        *last = val;
>  886
>  887        return 0;
>  888 }
> 
> I'm really not sure why ret is being calculated on lines 883 and 884 but
> never used. Should that be *last = ret on line 885? Looks suspect anyhow.

According to git blame, the code seems to have been intentionally disabled by
the following commit:

commit 96f3823f537088c13735cfdfbf284436c802352a
Author: Jesse Barnes <jbarnes@xxxxxxxxxxxxxxxx>
Date:   Tue Oct 5 14:50:59 2010 -0400

    [PATCH 2/2] IPS driver: disable CPU turbo
    
    The undocumented interface we're using for reading CPU power seems to be
    overreporting power.  Until we figure out how to correct it, disable CPU
    turbo and power reporting to be safe.  This will keep the CPU within default
    limits and still allow us to increase GPU frequency as needed.

Maybe wrap the code after thm_readl() in #if 0 in case somebody ends up
wanting to fix it (rough sketch below)? Or eliminate it completely.

In theory the thm_readl() may affect the system behavior, so I would not
remove that, out of extra paranoia.
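
Something like the following, purely as an untested sketch based on the
snippet quoted above (thm_readl(), THM_CEC and struct ips_driver are the
existing identifiers from intel_ips.c; returning ret when re-enabled is an
assumption about the original intent, not what the current code does):

static u32 get_cpu_power(struct ips_driver *ips, u32 *last, int period)
{
	u32 val;

	/* Keep the read itself in case it has side effects on the THM unit */
	val = thm_readl(THM_CEC);

#if 0
	/*
	 * CEC is in joules/65535.  Take difference over time to get watts.
	 * Disabled since commit 96f3823f5370 ("IPS driver: disable CPU
	 * turbo") because the interface overreports power.
	 */
	{
		/* period is in ms and we want mW */
		u32 ret = (((val - *last) * 1000) / period);

		ret = (ret * 1000) / 65535;
		*last = val;
		return ret;
	}
#endif

	*last = val;
	return 0;
}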

Regards, Joonas



