Re: [PATCH v6 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock

+ linux-sparse mailing list

It seems like a bug in the way sparse handles “pure” functions that return
a pointer.

One of the pure functions in question is defined as follows:
	static inline __pure
	struct mcs_spinlock *grab_mcs_node(struct mcs_spinlock *base, int idx)
	{
		return &((struct qnode *)base + idx)->mcs;
	}

and the corresponding variable definition and the assignment that produce
a warning (in kernel/locking/qspinlock.c) are:
	struct mcs_spinlock *prev, *next, *node;
	…
	node = grab_mcs_node(node, idx);
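
The new warnings in qspinlock_cna.h are at call sites of decode_tail(), which
carries the same __pure annotation in kernel/locking/qspinlock.c (only the
declaration is shown here):

	static inline __pure struct mcs_spinlock *decode_tail(u32 tail);

so all of these warnings appear to share the same root cause.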

The issue can be recreated without my patch with
	# sparse version: v0.6.1
	make ARCH=x86_64 allmodconfig
	make C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' kernel/locking/qspinlock.o


The warnings can be eliminated by adding an explicit cast, e.g.:

	node = (struct mcs_spinlock *)grab_mcs_node(node, idx);

but this seems wrong (unnecessary) to me.
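
For what it's worth, a standalone reduction along the following lines (my own
sketch, not taken from the kernel tree; the file, struct and function names
are made up) should be enough to exercise the same path in sparse, assuming
the problem really is in how sparse propagates the pure attribute through the
returned pointer type:

	/* pure-ptr.c -- check with: sparse pure-ptr.c */
	struct mcs_spinlock {
		struct mcs_spinlock *next;
		int locked;
	};

	static __attribute__((pure))
	struct mcs_spinlock *grab(struct mcs_spinlock *base, int idx)
	{
		/* pure function returning a pointer, like grab_mcs_node() */
		return base + idx;
	}

	void foo(struct mcs_spinlock *node)
	{
		/* assignment of the [pure]-qualified result to a plain pointer */
		node = grab(node, 1);
	}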

Regards,
— Alex

> On Nov 10, 2019, at 4:30 PM, kbuild test robot <lkp@xxxxxxxxx> wrote:
> 
> Hi Alex,
> 
> Thank you for the patch! Perhaps something to improve:
> 
> [auto build test WARNING on linus/master]
> [cannot apply to v5.4-rc6 next-20191108]
> [if your patch is applied to the wrong git tree, please drop us a note to help
> improve the system. BTW, we also suggest to use '--base' option to specify the
> base tree in git format-patch, please see https://stackoverflow.com/a/37406982 ]
> 
> url:    https://github.com/0day-ci/linux/commits/Alex-Kogan/locking-qspinlock-Rename-mcs-lock-unlock-macros-and-make-them-more-generic/20191109-180535
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 0058b0a506e40d9a2c62015fe92eb64a44d78cd9
> reproduce:
>        # apt-get install sparse
>        # sparse version: v0.6.1-21-gb31adac-dirty
>        make ARCH=x86_64 allmodconfig
>        make C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__'
> 
> If you fix the issue, kindly add following tag
> Reported-by: kbuild test robot <lkp@xxxxxxxxx>
> 
> 
> sparse warnings: (new ones prefixed by >>)
> 
>   kernel/locking/qspinlock.c:450:14: sparse: sparse: incorrect type in assignment (different modifiers) @@    expected struct mcs_spinlock *[assigned] node @@    got ct mcs_spinlock *[assigned] node @@
>   kernel/locking/qspinlock.c:450:14: sparse:    expected struct mcs_spinlock *[assigned] node
>   kernel/locking/qspinlock.c:450:14: sparse:    got struct mcs_spinlock [pure] *
>   kernel/locking/qspinlock.c:498:22: sparse: sparse: incorrect type in assignment (different modifiers) @@    expected struct mcs_spinlock *prev @@    got struct struct mcs_spinlock *prev @@
>   kernel/locking/qspinlock.c:498:22: sparse:    expected struct mcs_spinlock *prev
>   kernel/locking/qspinlock.c:498:22: sparse:    got struct mcs_spinlock [pure] *
>>> kernel/locking/qspinlock_cna.h:141:60: sparse: sparse: incorrect type in initializer (different modifiers) @@    expected struct mcs_spinlock *tail_2nd @@    got struct struct mcs_spinlock *tail_2nd @@
>>> kernel/locking/qspinlock_cna.h:141:60: sparse:    expected struct mcs_spinlock *tail_2nd
>>> kernel/locking/qspinlock_cna.h:141:60: sparse:    got struct mcs_spinlock [pure] *
>   kernel/locking/qspinlock.c:450:14: sparse: sparse: incorrect type in assignment (different modifiers) @@    expected struct mcs_spinlock *[assigned] node @@    got ct mcs_spinlock *[assigned] node @@
>   kernel/locking/qspinlock.c:450:14: sparse:    expected struct mcs_spinlock *[assigned] node
>   kernel/locking/qspinlock.c:450:14: sparse:    got struct mcs_spinlock [pure] *
>   kernel/locking/qspinlock.c:498:22: sparse: sparse: incorrect type in assignment (different modifiers) @@    expected struct mcs_spinlock *prev @@    got struct struct mcs_spinlock *prev @@
>   kernel/locking/qspinlock.c:498:22: sparse:    expected struct mcs_spinlock *prev
>   kernel/locking/qspinlock.c:498:22: sparse:    got struct mcs_spinlock [pure] *
>>> kernel/locking/qspinlock_cna.h:107:18: sparse: sparse: incorrect type in assignment (different modifiers) @@    expected struct mcs_spinlock *tail_2nd @@    got struct struct mcs_spinlock *tail_2nd @@
>   kernel/locking/qspinlock_cna.h:107:18: sparse:    expected struct mcs_spinlock *tail_2nd
>   kernel/locking/qspinlock_cna.h:107:18: sparse:    got struct mcs_spinlock [pure] *
>>> kernel/locking/qspinlock_cna.h:240:61: sparse: sparse: incorrect type in argument 2 (different modifiers) @@    expected struct mcs_spinlock *pred_start @@    got struct struct mcs_spinlock *pred_start @@
>>> kernel/locking/qspinlock_cna.h:240:61: sparse:    expected struct mcs_spinlock *pred_start
>   kernel/locking/qspinlock_cna.h:240:61: sparse:    got struct mcs_spinlock [pure] *
>   kernel/locking/qspinlock_cna.h:252:26: sparse: sparse: incorrect type in assignment (different modifiers) @@    expected struct mcs_spinlock *tail_2nd @@    got struct struct mcs_spinlock *tail_2nd @@
>   kernel/locking/qspinlock_cna.h:252:26: sparse:    expected struct mcs_spinlock *tail_2nd
>   kernel/locking/qspinlock_cna.h:252:26: sparse:    got struct mcs_spinlock [pure] *
>   kernel/locking/qspinlock.c:450:14: sparse: sparse: incorrect type in assignment (different modifiers) @@    expected struct mcs_spinlock *[assigned] node @@    got ct mcs_spinlock *[assigned] node @@
>   kernel/locking/qspinlock.c:450:14: sparse:    expected struct mcs_spinlock *[assigned] node
>   kernel/locking/qspinlock.c:450:14: sparse:    got struct mcs_spinlock [pure] *
>   kernel/locking/qspinlock.c:498:22: sparse: sparse: incorrect type in assignment (different modifiers) @@    expected struct mcs_spinlock *prev @@    got struct struct mcs_spinlock *prev @@
>   kernel/locking/qspinlock.c:498:22: sparse:    expected struct mcs_spinlock *prev
>   kernel/locking/qspinlock.c:498:22: sparse:    got struct mcs_spinlock [pure] *
> 
> vim +141 kernel/locking/qspinlock_cna.h
> 
>    90	
>    91	static inline bool cna_try_change_tail(struct qspinlock *lock, u32 val,
>    92					       struct mcs_spinlock *node)
>    93	{
>    94		struct mcs_spinlock *head_2nd, *tail_2nd;
>    95		u32 new;
>    96	
>    97		/* If the secondary queue is empty, do what MCS does. */
>    98		if (node->locked <= 1)
>    99			return __try_clear_tail(lock, val, node);
>   100	
>   101		/*
>   102		 * Try to update the tail value to the last node in the secondary queue.
>   103		 * If successful, pass the lock to the first thread in the secondary
>   104		 * queue. Doing those two actions effectively moves all nodes from the
>   105		 * secondary queue into the main one.
>   106		 */
>> 107		tail_2nd = decode_tail(node->locked);
>   108		head_2nd = tail_2nd->next;
>   109		new = ((struct cna_node *)tail_2nd)->encoded_tail + _Q_LOCKED_VAL;
>   110	
>   111		if (atomic_try_cmpxchg_relaxed(&lock->val, &val, new)) {
>   112			/*
>   113			 * Try to reset @next in tail_2nd to NULL, but no need to check
>   114			 * the result - if failed, a new successor has updated it.
>   115			 */
>   116			cmpxchg_relaxed(&tail_2nd->next, head_2nd, NULL);
>   117			arch_mcs_pass_lock(&head_2nd->locked, 1);
>   118			return true;
>   119		}
>   120	
>   121		return false;
>   122	}
>   123	
>   124	/*
>   125	 * cna_splice_tail -- splice nodes in the main queue between [first, last]
>   126	 * onto the secondary queue.
>   127	 */
>   128	static void cna_splice_tail(struct mcs_spinlock *node,
>   129				    struct mcs_spinlock *first,
>   130				    struct mcs_spinlock *last)
>   131	{
>   132		/* remove [first,last] */
>   133		node->next = last->next;
>   134	
>   135		/* stick [first,last] on the secondary queue tail */
>   136		if (node->locked <= 1) { /* if secondary queue is empty */
>   137			/* create secondary queue */
>   138			last->next = first;
>   139		} else {
>   140			/* add to the tail of the secondary queue */
>> 141			struct mcs_spinlock *tail_2nd = decode_tail(node->locked);
>   142			struct mcs_spinlock *head_2nd = tail_2nd->next;
>   143	
>   144			tail_2nd->next = first;
>   145			last->next = head_2nd;
>   146		}
>   147	
>   148		node->locked = ((struct cna_node *)last)->encoded_tail;
>   149	}
>   150	
>   151	/*
>   152	 * cna_scan_main_queue - scan the main waiting queue looking for the first
>   153	 * thread running on the same NUMA node as the lock holder. If found (call it
>   154	 * thread T), move all threads in the main queue between the lock holder and
>   155	 * T to the end of the secondary queue and return 0; otherwise, return the
>   156	 * encoded pointer of the last scanned node in the primary queue (so a
>   157	 * subsequent scan can be resumed from that node)
>   158	 *
>   159	 * Schematically, this may look like the following (nn stands for numa_node and
>   160	 * et stands for encoded_tail).
>   161	 *
>   162	 *   when cna_scan_main_queue() is called (the secondary queue is empty):
>   163	 *
>   164	 *  A+------------+   B+--------+   C+--------+   T+--------+
>   165	 *   |mcs:next    | -> |mcs:next| -> |mcs:next| -> |mcs:next| -> NULL
>   166	 *   |mcs:locked=1|    |cna:nn=0|    |cna:nn=2|    |cna:nn=1|
>   167	 *   |cna:nn=1    |    +--------+    +--------+    +--------+
>   168	 *   +----------- +
>   169	 *
>   170	 *   when cna_scan_main_queue() returns (the secondary queue contains B and C):
>   171	 *
>   172	 *  A+----------------+    T+--------+
>   173	 *   |mcs:next        | ->  |mcs:next| -> NULL
>   174	 *   |mcs:locked=C.et | -+  |cna:nn=1|
>   175	 *   |cna:nn=1        |  |  +--------+
>   176	 *   +--------------- +  +-----+
>   177	 *                             \/
>   178	 *          B+--------+   C+--------+
>   179	 *           |mcs:next| -> |mcs:next| -+
>   180	 *           |cna:nn=0|    |cna:nn=2|  |
>   181	 *           +--------+    +--------+  |
>   182	 *               ^                     |
>   183	 *               +---------------------+
>   184	 *
>   185	 * The worst case complexity of the scan is O(n), where n is the number
>   186	 * of current waiters. However, the amortized complexity is close to O(1),
>   187	 * as the immediate successor is likely to be running on the same node once
>   188	 * threads from other nodes are moved to the secondary queue.
>   189	 */
>   190	static u32 cna_scan_main_queue(struct mcs_spinlock *node,
>   191				       struct mcs_spinlock *pred_start)
>   192	{
>   193		struct cna_node *cn = (struct cna_node *)node;
>   194		struct cna_node *cni = (struct cna_node *)READ_ONCE(pred_start->next);
>   195		struct cna_node *last;
>   196		int my_numa_node = cn->numa_node;
>   197	
>   198		/* find any next waiter on 'our' NUMA node */
>   199		for (last = cn;
>   200		     cni && cni->numa_node != my_numa_node;
>   201		     last = cni, cni = (struct cna_node *)READ_ONCE(cni->mcs.next))
>   202			;
>   203	
>   204		/* if found, splice any skipped waiters onto the secondary queue */
>   205		if (cni) {
>   206			if (last != cn)	/* did we skip any waiters? */
>   207				cna_splice_tail(node, node->next,
>   208						(struct mcs_spinlock *)last);
>   209			return 0;
>   210		}
>   211	
>   212		return last->encoded_tail;
>   213	}
>   214	
>   215	__always_inline u32 cna_pre_scan(struct qspinlock *lock,
>   216					  struct mcs_spinlock *node)
>   217	{
>   218		struct cna_node *cn = (struct cna_node *)node;
>   219	
>   220		cn->pre_scan_result = cna_scan_main_queue(node, node);
>   221	
>   222		return 0;
>   223	}
>   224	
>   225	static inline void cna_pass_lock(struct mcs_spinlock *node,
>   226					 struct mcs_spinlock *next)
>   227	{
>   228		struct cna_node *cn = (struct cna_node *)node;
>   229		struct mcs_spinlock *next_holder = next, *tail_2nd;
>   230		u32 val = 1;
>   231	
>   232		u32 scan = cn->pre_scan_result;
>   233	
>   234		/*
>   235		 * check if a successor from the same numa node has not been found in
>   236		 * pre-scan, and if so, try to find it in post-scan starting from the
>   237		 * node where pre-scan stopped (stored in @pre_scan_result)
>   238		 */
>   239		if (scan > 0)
>> 240			scan = cna_scan_main_queue(node, decode_tail(scan));
> 
> ---
> 0-DAY kernel test infrastructure                 Open Source Technology Center
> https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org           Intel Corporation




