Thanks. Will apply the patch to drm-next.
Best Regards
Rex

From: Joe Perches <joe@xxxxxxxxxxx>
Sent: Thursday, March 22, 2018 3:02 AM
To: Colin King; Koenig, Christian; Zhou, David(ChunMing); David Airlie; Zhu, Rex; amd-gfx@xxxxxxxxxxxxxxxxxxxxx; dri-devel@xxxxxxxxxxxxxxxxxxxxx
Cc: kernel-janitors@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
Subject: Re: [PATCH] drm/amd/pp: use mlck_table.count for array loop index limit

On Wed, 2018-03-21 at 18:26 +0000, Colin King wrote:
> From: Colin Ian King <colin.king@xxxxxxxxxxxxx>
>
> The for-loops process data in the mclk_table but use sclk_table.count
> as the loop index limit. I believe these are cut-n-paste errors from
> the previous almost identical loops as indicated by static analysis.
> Fix these.

Nice tool.

> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
[]
> @@ -855,7 +855,7 @@ static int smu7_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
>
>  	odn_table->odn_memory_clock_dpm_levels.num_of_pl =
>  			data->golden_dpm_table.mclk_table.count;
> -	for (i=0; i<data->golden_dpm_table.sclk_table.count; i++) {
> +	for (i=0; i<data->golden_dpm_table.mclk_table.count; i++) {
>  		odn_table->odn_memory_clock_dpm_levels.entries[i].clock =
>  				data->golden_dpm_table.mclk_table.dpm_levels[i].value;
>  		odn_table->odn_memory_clock_dpm_levels.entries[i].enabled = true;

Probably more sensible to use temporaries too.

Maybe something like the below (also trivially reduces object size)
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
index df2a312ca6c9..339b897146af 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
@@ -834,6 +834,7 @@ static int smu7_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
 	struct phm_ppt_v1_clock_voltage_dependency_table *dep_sclk_table;
 	struct phm_ppt_v1_clock_voltage_dependency_table *dep_mclk_table;
+	struct phm_odn_performance_level *entries;
 
 	if (table_info == NULL)
 		return -EINVAL;
 
@@ -843,11 +844,11 @@ static int smu7_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
 
 	odn_table->odn_core_clock_dpm_levels.num_of_pl =
 			data->golden_dpm_table.sclk_table.count;
+	entries = odn_table->odn_core_clock_dpm_levels.entries;
 	for (i=0; i<data->golden_dpm_table.sclk_table.count; i++) {
-		odn_table->odn_core_clock_dpm_levels.entries[i].clock =
-				data->golden_dpm_table.sclk_table.dpm_levels[i].value;
-		odn_table->odn_core_clock_dpm_levels.entries[i].enabled = true;
-		odn_table->odn_core_clock_dpm_levels.entries[i].vddc = dep_sclk_table->entries[i].vddc;
+		entries[i].clock = data->golden_dpm_table.sclk_table.dpm_levels[i].value;
+		entries[i].enabled = true;
+		entries[i].vddc = dep_sclk_table->entries[i].vddc;
 	}
 
 	smu7_get_voltage_dependency_table(dep_sclk_table,
@@ -855,11 +856,11 @@ static int smu7_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
 
 	odn_table->odn_memory_clock_dpm_levels.num_of_pl =
 			data->golden_dpm_table.mclk_table.count;
-	for (i=0; i<data->golden_dpm_table.sclk_table.count; i++) {
-		odn_table->odn_memory_clock_dpm_levels.entries[i].clock =
-				data->golden_dpm_table.mclk_table.dpm_levels[i].value;
-		odn_table->odn_memory_clock_dpm_levels.entries[i].enabled = true;
-		odn_table->odn_memory_clock_dpm_levels.entries[i].vddc = dep_mclk_table->entries[i].vddc;
+	entries = odn_table->odn_memory_clock_dpm_levels.entries;
+	for (i=0; i<data->golden_dpm_table.mclk_table.count; i++) {
+		entries[i].clock = data->golden_dpm_table.mclk_table.dpm_levels[i].value;
+		entries[i].enabled = true;
+		entries[i].vddc = dep_mclk_table->entries[i].vddc;
 	}
 
 	smu7_get_voltage_dependency_table(dep_mclk_table,
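[Editor's note: for readers outside the driver, a minimal standalone C sketch of the bug class the patch fixes — bounding a loop with one table's count while reading another table. The struct and field names below are simplified stand-ins for illustration, not the real smu7_hwmgr types.]

/*
 * Sketch only: iterating mclk data with sclk's count reads past the
 * valid mclk entries (stale data here; out-of-bounds with heap tables).
 */
#include <stdio.h>

struct dpm_level {
	unsigned int value;
};

struct dpm_table {
	struct dpm_level dpm_levels[8];
	unsigned int count;	/* number of valid entries */
};

int main(void)
{
	struct dpm_table sclk = { .count = 8 };
	struct dpm_table mclk = { .count = 3 };
	unsigned int out[8] = { 0 };
	unsigned int i;

	for (i = 0; i < sclk.count; i++)
		sclk.dpm_levels[i].value = 300 + i * 100;
	for (i = 0; i < mclk.count; i++)
		mclk.dpm_levels[i].value = 150 + i * 150;

	/*
	 * Buggy pattern (what the patch removes):
	 *
	 *	for (i = 0; i < sclk.count; i++)
	 *		out[i] = mclk.dpm_levels[i].value;
	 *
	 * With sclk.count == 8 and mclk.count == 3, this would copy
	 * five uninitialized entries.  Fixed pattern: bound the loop
	 * by the count of the table actually being read.
	 */
	for (i = 0; i < mclk.count; i++)
		out[i] = mclk.dpm_levels[i].value;

	for (i = 0; i < mclk.count; i++)
		printf("mclk[%u] = %u\n", i, out[i]);

	return 0;
}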