On 05/04/2015 10:20 AM, Peter Zijlstra wrote:
> I changed it to the below; I've not gotten around to compiling or even running it yet :-(
>
> The biggest change is the pv_hash/pv_unhash functions, which I've rewritten to hopefully be clearer (and also hopefully not have wrecked them). I took out the cacheline-sized structure, which removes that double loop and simplifies things.
>
> I've also added some comments which hopefully explain how/why we ended up with this exact scheme.
>
> I've also moved the __pv_queue_spin_unlock() function to the tail, such that we keep the 'wait'/'kick' order for both node and head.
>
> In any case, like I just wrote in the other email, I've stuck some things in my queue (up to and including patch 11) and if it all works out we can continue from there.
>
> ---
> Subject: pvqspinlock: Implement simple paravirt support for the qspinlock
> From: Waiman Long <Waiman.Long@xxxxxx>
> Date: Fri, 24 Apr 2015 14:56:37 -0400
>
> Provide a separate (second) version of the spin_lock_slowpath for paravirt along with a special unlock path.
>
> The second slowpath is generated by adding a few pv hooks to the normal slowpath, but where those compile away for the native case, they expand into special wait/wake code for the pv version.
>
> The actual MCS queue can use extra storage in the mcs_nodes[] array to keep track of state and therefore uses directed wakeups.
>
> The head contender has no such storage directly visible to the unlocker, so the unlocker searches a hash table with open addressing using a simple binary Galois linear feedback shift register.
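As a rough sketch of what "compile away for the native case" can look like: the pv hooks are empty static inlines when paravirt spinlocks are disabled, so the native slowpath pays nothing for them, while the pv build gets real wait/kick code. The hook names and the CONFIG_PARAVIRT_SPINLOCKS guard below are illustrative assumptions, not necessarily the exact mechanism used in the patch.

struct mcs_spinlock;	/* per-CPU queue node used by the MCS queue */

#ifdef CONFIG_PARAVIRT_SPINLOCKS
/* Real implementations: halt the vCPU while waiting, kick it on release. */
void pv_wait_node(struct mcs_spinlock *node);
void pv_kick_node(struct mcs_spinlock *node);
#else
/* Empty hooks; the compiler removes these calls from the native slowpath. */
static inline void pv_wait_node(struct mcs_spinlock *node) { }
static inline void pv_kick_node(struct mcs_spinlock *node) { }
#endif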
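And a minimal, user-space sketch of the lookup scheme in the last paragraph: an open-addressing hash table keyed by the lock address and probed with a binary Galois LFSR, so the unlocker can find the head waiter's node. The table size, tap constant, and all identifiers here are illustrative assumptions, and locking/atomics are omitted.

#include <stdint.h>

#define PV_HASH_BITS	10
#define PV_HASH_SIZE	(1U << PV_HASH_BITS)
#define LFSR_TAPS	0x240U		/* x^10 + x^7 + 1, maximal-length */

struct pv_hash_entry {
	void *lock;	/* key: lock address, NULL when the slot is free */
	void *node;	/* value: the waiter's pv node */
};

static struct pv_hash_entry pv_hash_table[PV_HASH_SIZE];

/* One Galois LFSR step; cycles through every value 1..PV_HASH_SIZE-1. */
static uint32_t lfsr_step(uint32_t state)
{
	uint32_t lsb = state & 1;

	state >>= 1;
	if (lsb)
		state ^= LFSR_TAPS;
	return state;
}

/* Derive a non-zero initial probe index from the lock address. */
static uint32_t hash_lock(const void *lock)
{
	uint32_t h = (uint32_t)((uintptr_t)lock >> 4) & (PV_HASH_SIZE - 1);

	return h ? h : 1;	/* the LFSR must never be seeded with 0 */
}

/* Record (lock, node) so the eventual unlocker can find @node. */
static void pv_hash(void *lock, void *node)
{
	uint32_t idx = hash_lock(lock);

	/* Slot taken: step the LFSR (assumes the table never fills up). */
	while (pv_hash_table[idx].lock)
		idx = lfsr_step(idx);

	pv_hash_table[idx].lock = lock;
	pv_hash_table[idx].node = node;
}

/* Find and remove the entry for @lock; caller guarantees it was hashed. */
static void *pv_unhash(void *lock)
{
	uint32_t idx = hash_lock(lock);

	while (pv_hash_table[idx].lock != lock)
		idx = lfsr_step(idx);

	pv_hash_table[idx].lock = NULL;
	return pv_hash_table[idx].node;
}

Because pv_hash() and pv_unhash() seed the LFSR from the same lock address and step it identically, the unlocker retraces exactly the probe sequence the waiter used, and a single probe loop replaces the double loop over the cacheline-sized buckets mentioned above.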
I am fine with the change as it makes things simpler. BTW, I just saw a build error mail; that should be easy to fix with a minor edit.
Cheers,
Longman