On Fri, 30 May 2008, Peter Zijlstra wrote:

> The thing we generally do is, we add a lock to each per-cpu data item,
> use raw_smp_processor_id() to obtain the current cpu's data, lock the
> thing and work from it - even if we are migrated away.
>
> For instance:
>
> struct kmem_cache_cpu {
>         .....
>         spinlock_t lock;
> }
>
> struct kmem_cache_cpu *get_cpu_slab(struct kmem_cache *s, int cpu)
> {
>         struct kmem_cache_cpu *c = s->cpu_slab[cpu];
>         spin_lock(&c->lock);
>         return c;

Hmmm... Can we reschedule before spin_lock? It seems that preemption
must already be off for this to work.

> What this does is make a strong connection between data and concurrency
> control. Your proposed scheme weakens the data<->concurrency relation
> instead of making it stronger.

Yes, the cpu ops allow atomic per-cpu ops without preemption or
interrupt enable/disable. I thought that would help -rt quite a bit.

> Ah, we could still do the above by writing:
>
> struct kmem_cache_cpu *get_cpu_slab(struct kmem_cache *s)
> {
>         struct kmem_cache_cpu *c = THIS_CPU(s->cpu_slab);
>         spin_lock(&c->lock);
>         return c;
> }
>
> void put_cpu_slab(struct kmem_cache_cpu *c)
> {
>         spin_unlock(&c->lock);
> }
>
> Would it be possible to re-structure your API to also have these get/put
> methods instead of just a get?

I do not see a problem, since you must already have preemption disabled
when calling get_cpu_slab(). Otherwise you may take the lock on another
processor if the process was rescheduled.
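
To make that concrete, here is a minimal sketch of a caller of the
get/put pair quoted above; the function name and the work done under
the lock are made up for illustration, only preempt_disable()/
preempt_enable() and the get/put pair are from the discussion:

        /*
         * Illustrative only: preemption must be off across the lookup,
         * or THIS_CPU() may resolve on one processor and the lock be
         * taken after we were migrated to another.
         */
        void example_use(struct kmem_cache *s)
        {
                struct kmem_cache_cpu *c;

                preempt_disable();      /* pin to this cpu */
                c = get_cpu_slab(s);
                /* ... work on c ... */
                put_cpu_slab(c);
                preempt_enable();
        }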