Re: [PATCH 3/3] hwmon: (coretemp) Fix core count limitation

On Fri, 2023-12-01 at 14:58 -0800, Ashok Raj wrote:
> On Fri, Dec 01, 2023 at 09:47:30AM -0800, Zhang, Rui wrote:
> > On Thu, 2023-11-30 at 18:08 -0800, Ashok Raj wrote:
> > > On Mon, Nov 27, 2023 at 09:16:51PM +0800, Zhang Rui wrote:
> > > >  
> > > > +static struct temp_data *get_tdata(struct platform_data *pdata, int cpu)
> > > > +{
> > > > +       struct temp_data *tdata;
> > > > +
> > > > +       mutex_lock(&pdata->core_data_lock);
> > > > +       list_for_each_entry(tdata, &pdata->core_data_list, node) {
> > > > +               if (cpu >= 0 && !tdata->is_pkg_data && tdata->cpu_core_id == topology_core_id(cpu))
> > > > +                       goto found;
> > > > +               if (cpu < 0 && tdata->is_pkg_data)
> > > > +                       goto found;
> > > > +       }
> > > > +       tdata = NULL;
> > > 
> > > What used to be an array, is now a list? Is it possible to get the
> > > number of cores_per_package during initialization and allocate the
> > > per-core data? You could still get to them by indexing from core_id,
> > > and you could possibly lose the mutex and the search?
> > > 
> > > I don't know this code well enough... Just a thought.
> > 
> > yeah, sadly cores_per_package is not available for now, as I mentioned
> > in the other email.
> 
> Couldn't we reuse the logic from Thomas's topology work that gives this
> upfront based on CPUID leaf 0x1f?

This sounds like duplicate work to me. I don't see an urgent enough
requirement to justify that whole effort.
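
For illustration, the array-indexed lookup would look roughly like the
sketch below. The field names are made up, and it assumes a max core id /
cores-per-package value known at probe time, which is exactly the piece of
information that is missing today:

/*
 * Illustrative sketch only, not the actual coretemp code: if the number of
 * cores per package were known when the platform device is created, the
 * temp_data pointers could live in an array indexed by core id, and the
 * list walk plus mutex in get_tdata() would disappear.
 */
struct platform_data {
	struct temp_data **core_data;	/* max_core_id + 2 slots, last one for the package sensor */
	unsigned int max_core_id;	/* hypothetical: would need the topology info discussed above */
	/* ... other existing fields ... */
};

static struct temp_data *get_tdata(struct platform_data *pdata, int cpu)
{
	if (cpu < 0)	/* negative cpu means "package temp_data", as in the patch */
		return pdata->core_data[pdata->max_core_id + 1];

	return pdata->core_data[topology_core_id(cpu)];
}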

> 
> > 
> > > 
> > > > +found:
> > > > +       mutex_unlock(&pdata->core_data_lock);
> > > > +       return tdata;
> > > > +}
> > > > +
> > > >  static int create_core_data(struct platform_device *pdev, unsigned int cpu,
> > > >                             int pkg_flag)
> > > >  {
> > > > @@ -498,37 +511,29 @@ static int create_core_data(struct platform_device *pdev, unsigned int cpu,
> > > >         struct platform_data *pdata = platform_get_drvdata(pdev);
> > > >         struct cpuinfo_x86 *c = &cpu_data(cpu);
> > > >         u32 eax, edx;
> > > > -       int err, index, attr_no;
> > > > +       int err, attr_no;
> > > >  
> > > >         if (!housekeeping_cpu(cpu, HK_TYPE_MISC))
> > > >                 return 0;
> > > >  
> > > > +       tdata = get_tdata(pdata, pkg_flag ? -1 : cpu);
> > > > +       if (tdata)
> > > > +               return -EEXIST;
> > > > +
> > > > +       tdata = init_temp_data(cpu, pkg_flag);
> > > 
> > > Is temp_data per_cpu or per_core?
> > 
> > it is per_core.
> > 
> > >  Wasn't sure if temp_data needs a CPU number there along with
> > > cpu_core_id.
> > 
> > CPU number is needed to access the core temperature MSRs.
> 
> What if that cpu is currently offline? Maybe you can simply search with
> for_each_online_cpu() and match core_id from cpuinfo_x86?
> 
This is already handled by the CPU online/offline callbacks.
Each temp_data is always associated with an online CPU that can be used
to update the core temperature info.
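
Roughly, the idea is as in the simplified sketch below. This is not the
exact driver code; coretemp_get_pdata() and the cpumask field are stand-ins
for the real per-package lookup:

/*
 * Simplified sketch of what the hotplug callbacks guarantee: when a CPU
 * goes offline, the temp_data of its core is handed over to another
 * still-online sibling, so tdata->cpu always names an online CPU that can
 * reach the core's temperature MSRs.
 */
static int coretemp_cpu_offline(unsigned int cpu)
{
	struct platform_data *pd = coretemp_get_pdata(cpu);	/* hypothetical lookup helper */
	struct temp_data *tdata = get_tdata(pd, cpu);
	unsigned int target;

	/* Drop the dying CPU, then look for another online sibling of its core. */
	cpumask_clear_cpu(cpu, &pd->cpumask);
	target = cpumask_any_and(&pd->cpumask, topology_sibling_cpumask(cpu));

	if (target >= nr_cpu_ids)
		return 0;	/* last CPU of this core: the sensor is removed elsewhere */

	if (tdata && tdata->cpu == cpu)
		tdata->cpu = target;	/* keep MSR accesses on an online CPU */

	return 0;
}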

thanks,
rui




