[bug report] mm/mempolicy: Weighted Interleave Auto-tuning

Hello Joshua Hahn,

Commit aab5f6eb05fd ("mm/mempolicy: Weighted Interleave Auto-tuning")
from Feb 7, 2025 (linux-next), leads to the following Smatch static
checker warning:

	mm/mempolicy.c:220 mempolicy_set_node_perf()
	warn: assigned value is less than '1844674407370955161'

mm/mempolicy.c
    209 int mempolicy_set_node_perf(unsigned int node, struct access_coordinate *coords)
    210 {
    211         uint64_t *old_bw, *new_bw;
    212         uint64_t bw_val;
    213         u8 *old_iw, *new_iw;
    214 
    215         /*
    216          * Bandwidths above this limit cause rounding errors when reducing
    217          * weights. This value is ~16 exabytes, which is unreasonable anyways.

The comment here is worried about bandwidth values up in the exabyte range,

    218          */
    219         bw_val = min(coords->read_bandwidth, coords->write_bandwidth);
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
but coords->read_bandwidth and coords->write_bandwidth are both u32s.

--> 220         if (bw_val > (U64_MAX / 10))
                              ^^^^^^^^^^^^
There is no way they're going to be more than U64_MAX / 10. A u32 tops
out at 4294967295, while U64_MAX / 10 is 1844674407370955161 (the value
in the Smatch warning), so this check can never trigger.

    221                 return -EINVAL;
    222 
    223         new_bw = kcalloc(nr_node_ids, sizeof(uint64_t), GFP_KERNEL);
    224         if (!new_bw)
    225                 return -ENOMEM;
    226 
    227         new_iw = kcalloc(nr_node_ids, sizeof(u8), GFP_KERNEL);
    228         if (!new_iw) {
    229                 kfree(new_bw);
    230                 return -ENOMEM;
    231         }
    232 
    233         /*
    234          * Update bandwidth info, even in manual mode. That way, when switching
    235          * to auto mode in the future, iw_table can be overwritten using
    236          * accurate bw data.
    237          */
    238         mutex_lock(&iw_table_lock);
    239         old_bw = node_bw_table;
    240         old_iw = rcu_dereference_protected(iw_table,
    241                                            lockdep_is_held(&iw_table_lock));
    242 
    243         if (old_bw)
    244                 memcpy(new_bw, old_bw, nr_node_ids * sizeof(uint64_t));
    245         new_bw[node] = bw_val;
    246         node_bw_table = new_bw;
    247 
    248         if (weighted_interleave_auto) {
    249                 reduce_interleave_weights(new_bw, new_iw);
    250         } else if (old_iw) {
    251                 /*
    252                  * The first time mempolicy_set_node_perf is called, old_iw
    253                  * (iw_table) is null. If that is the case, assign a zeroed
    254                  * table to it. Otherwise, free the newly allocated iw_table.
    255                  */
    256                 mutex_unlock(&iw_table_lock);
    257                 kfree(new_iw);
    258                 kfree(old_bw);
    259                 return 0;
    260         }
    261 
    262         rcu_assign_pointer(iw_table, new_iw);
    263         mutex_unlock(&iw_table_lock);
    264         synchronize_rcu();
    265         kfree(old_iw);
    266         kfree(old_bw);
    267         return 0;
    268 }
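
To make the mismatch concrete, here is a small userspace demo (not
kernel code, just an illustration of the arithmetic): even with both
bandwidth fields saturated at U32_MAX, bw_val ends up more than eight
orders of magnitude below U64_MAX / 10, so the comparison is dead code.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Worst case: both u32 bandwidth fields at their maximum. */
	uint32_t read_bw = UINT32_MAX;
	uint32_t write_bw = UINT32_MAX;

	/* Mirrors the min() assignment into a 64-bit bw_val. */
	uint64_t bw_val = read_bw < write_bw ? read_bw : write_bw;

	printf("largest possible bw_val: %" PRIu64 "\n", bw_val);
	printf("U64_MAX / 10:            %" PRIu64 "\n", UINT64_MAX / 10);
	printf("check can fire:          %s\n",
	       bw_val > UINT64_MAX / 10 ? "yes" : "no");
	return 0;
}

If the limit is meant to future-proof against the access_coordinate
fields being widened later, a comment saying so would make the intent
clear; as written, the condition can never be true.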

regards,
dan carpenter



