Hi
On 2019/3/22 23:21, Jens Axboe wrote:
> On 3/22/19 9:04 AM, Peter Zijlstra wrote:
>> On Fri, Mar 22, 2019 at 04:01:16PM +0100, Peter Zijlstra wrote:
>>> On Fri, Mar 22, 2019 at 10:48:17PM +0800, Yufen Yu wrote:
>>>> @@ -2710,7 +2710,7 @@ static struct blk_mq_hw_ctx *blk_mq_alloc_and_init_hctx(
>>>> 		return NULL;
>>>> 	}
>>>> -	atomic_set(&hctx->nr_active, 0);
>>>> +	refcount_set(&hctx->nr_active, 0);
>>>> 	hctx->numa_node = node;
>>>> 	hctx->queue_num = hctx_idx;
>>>
>>> That looks bogus, refcount_t cannot inc-from-zero.
>>
>> I also don't see a single dec_and_test in that patch, which leads me to
>> believe nr_active is not in fact a refcount.
>
> It isn't a refcount at all, it's just a count of how many queues are
> active in a shared tag map scenario.
Sorry for the noise, and thanks a lot for the review. I did not
understand the difference between a plain counter and a refcount, so I
assumed that 'atomic' could be converted to 'refcount' anywhere, whether
or not it was really needed, since 'refcount' has the advantage of
detecting overflow and underflow.
Elena Reshetova has clearly summarized which atomic_t uses should be
converted to refcount_t [1]; that was worth learning from.
" atomic_t variables are currently used to implement reference
counters with the following properties:
- counter is initialized to 1 using atomic_set()
- a resource is freed upon counter reaching zero
- once counter reaches zero, its further
increments aren't allowed
- counter schema uses basic atomic operations
(set, inc, inc_not_zero, dec_and_test, etc.)
Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. "
[1] https://lore.kernel.org/patchwork/patch/826782/
Thanks,
Yufen