On 07/10/2019 16:04, Ming Lei wrote:
On Mon, Oct 07, 2019 at 11:23:22AM +0100, John Garry wrote:
On 06/10/2019 03:45, Ming Lei wrote:
+	}
+}
+
+static int blk_mq_hctx_notify_online(unsigned int cpu, struct hlist_node *node)
+{
+	struct blk_mq_hw_ctx *hctx = hlist_entry_safe(node,
+			struct blk_mq_hw_ctx, cpuhp_online);
+	unsigned prev_cpu = -1;
+
+	while (true) {
+		unsigned next_cpu = cpumask_next_and(prev_cpu, hctx->cpumask,
+				cpu_online_mask);
+
+		if (next_cpu >= nr_cpu_ids)
+			break;
+
+		/* return if there is another online CPU on this hctx */
+		if (next_cpu != cpu)
+			return 0;
+
+		prev_cpu = next_cpu;
+	}
+
+	set_bit(BLK_MQ_S_INTERNAL_STOPPED, &hctx->state);
+	blk_mq_drain_inflight_rqs(hctx);
+
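In effect, the loop above asks whether 'cpu' is the only CPU left in
hctx->cpumask & cpu_online_mask. A minimal userspace sketch of that
predicate, with plain bitmasks standing in for the kernel cpumask API
(the function and mask names here are hypothetical, for illustration
only, not kernel code):

#include <stdbool.h>
#include <stdio.h>

/*
 * True when no online CPU other than 'cpu' remains in the hctx mask,
 * i.e. the condition under which the hunk above stops and drains.
 */
static bool no_other_online_cpu(unsigned long hctx_mask,
				unsigned long online_mask,
				unsigned int cpu)
{
	return ((hctx_mask & online_mask) & ~(1UL << cpu)) == 0;
}

int main(void)
{
	/* hctx covers CPUs 2-3; CPU 3 is the last one online: drain */
	printf("%d\n", no_other_online_cpu(0xc, 0x8, 3));	/* prints 1 */
	/* CPU 2 is still online as well: don't drain */
	printf("%d\n", no_other_online_cpu(0xc, 0xc, 3));	/* prints 0 */
	return 0;
}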
Does this do the same:
{
	struct blk_mq_hw_ctx *hctx = hlist_entry_safe(node,
			struct blk_mq_hw_ctx, cpuhp_online);
	cpumask_var_t tmp;

	cpumask_and(tmp, hctx->cpumask, cpu_online_mask);

	/* test if there is any other cpu online in the hctx cpu mask */
	if (cpumask_any_but(tmp, cpu) < nr_cpu_ids)
		return 0;

	set_bit(BLK_MQ_S_INTERNAL_STOPPED, &hctx->state);
	blk_mq_drain_inflight_rqs(hctx);

	return 0;
}
If so, it's more readable and concise.
Yes, but we would have to allocate space for 'tmp', which is what this
patch tries to avoid, given the logic isn't too complicated.
Yeah, I forgot about the extra complications of the cpumask offstack
stuff; but it does seem rarely used...
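For reference, with CONFIG_CPUMASK_OFFSTACK enabled, cpumask_var_t is a
pointer rather than an on-stack array, so the cpumask_and() version above
would need an allocation before it could be used. A sketch only (error
handling hand-waved; the point is that the allocation can fail, which is
awkward in a hotplug callback):

static int blk_mq_hctx_notify_online(unsigned int cpu, struct hlist_node *node)
{
	struct blk_mq_hw_ctx *hctx = hlist_entry_safe(node,
			struct blk_mq_hw_ctx, cpuhp_online);
	cpumask_var_t tmp;

	/* with CONFIG_CPUMASK_OFFSTACK this really allocates and may fail */
	if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
		return -ENOMEM;

	cpumask_and(tmp, hctx->cpumask, cpu_online_mask);

	/* any other online CPU in the hctx mask? then nothing to do */
	if (cpumask_any_but(tmp, cpu) >= nr_cpu_ids) {
		set_bit(BLK_MQ_S_INTERNAL_STOPPED, &hctx->state);
		blk_mq_drain_inflight_rqs(hctx);
	}

	free_cpumask_var(tmp);
	return 0;
}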
There is this:
{
	struct blk_mq_hw_ctx *hctx = hlist_entry_safe(node,
			struct blk_mq_hw_ctx, cpuhp_online);

	if ((cpumask_next_and(-1, hctx->cpumask, cpu_online_mask) == cpu) &&
	    (cpumask_next_and(cpu, hctx->cpumask, cpu_online_mask) >= nr_cpu_ids)) {
		set_bit(BLK_MQ_S_INTERNAL_STOPPED, &hctx->state);
		blk_mq_drain_inflight_rqs(hctx);
	}

	return 0;
}
... which looks effectively the same as yours, except a bit more
readable to me.
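The two checks do agree whenever 'cpu' is actually present in the online
part of the hctx mask; they only differ in the degenerate case of an
empty intersection, where the loop version would still drain. A
userspace brute-force comparison over 4-bit masks, with next_and() as a
stand-in for cpumask_next_and() (all names here are hypothetical):

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

/* next set bit strictly after 'n' in (a & b); NR_CPUS if none */
static int next_and(int n, unsigned int a, unsigned int b)
{
	for (int i = n + 1; i < NR_CPUS; i++)
		if ((a & b) & (1U << i))
			return i;
	return NR_CPUS;
}

/* the loop from the patch: drain unless another online CPU is in the mask */
static bool loop_check(unsigned int hctx, unsigned int online, int cpu)
{
	int prev = -1;

	while (true) {
		int next = next_and(prev, hctx, online);

		if (next >= NR_CPUS)
			break;
		if (next != cpu)
			return false;
		prev = next;
	}
	return true;
}

/* the two-call variant: drain iff 'cpu' is the first and only online CPU */
static bool two_call_check(unsigned int hctx, unsigned int online, int cpu)
{
	return next_and(-1, hctx, online) == cpu &&
	       next_and(cpu, hctx, online) >= NR_CPUS;
}

int main(void)
{
	for (unsigned int hctx = 0; hctx < 16; hctx++)
		for (unsigned int online = 0; online < 16; online++)
			for (int cpu = 0; cpu < NR_CPUS; cpu++)
				if (loop_check(hctx, online, cpu) !=
				    two_call_check(hctx, online, cpu))
					printf("differ: hctx=%x online=%x cpu=%d\n",
					       hctx, online, cpu);
	return 0;
}

Every line it prints has hctx & online == 0, i.e. the two only disagree
when none of the hctx's CPUs are online at all.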
Thanks,
John
BTW, you could have added my Tested-by tags...
OK, I will add it in V3.
Thanks,
Ming