I'm now poring over Love's book in detail, and the section in Chapter 4 on how the wait queue is implemented in the text completely surprised me. Is he really recommending that you write your own wait queue entry routine for every process? Isn't that reckless? He is suggesting:

    DEFINE_WAIT(wait);                  /* what IS wait? */

    add_wait_queue(&q, &wait);          /* in the current kernel this involves
                                           flag checking and a linked list */
    while (!condition) {                /* condition is the event we are waiting for */
            prepare_to_wait(&q, &wait, TASK_INTERRUPTIBLE);
            if (signal_pending(current))
                    /* handle signal */
            schedule();
    }
    finish_wait(&q, &wait);

He also writes out how this proceeds to function, and one part confuses me:

    5. When the task awakens, it again checks whether the condition is
       true. If it is, it exits the loop. Otherwise, it again calls
       schedule().

That is not the order it seems to follow according to the code. To me it looks like it should go:

1 - create the wait queue entry
2 - add &wait onto queue q
3 - check whether the condition is true; if it is, skip the loop; if not, enter the while loop
4 - prepare_to_wait, which changes the state of our &wait to TASK_INTERRUPTIBLE
5 - check for signals ... notice the process is still running. Does it stop and wait now?
6 - schedule itself on the run queue's rbtree ... which makes NO sense unless there was a stoppage I didn't know about
7 - check the condition again and repeat the while loop
7a - if the loop ends, finish_wait ... take it off the queue

Isn't it reckless to leave it to users to write this code? You're begging for a race condition.

Ruben
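
P.S. For comparison, here is a minimal sketch of what I would have expected instead, assuming wait_event_interruptible() from include/linux/wait.h is meant to wrap exactly this loop (the names my_wq, my_condition, wait_for_event and signal_event are made up for illustration):

    #include <linux/wait.h>
    #include <linux/sched.h>

    static DECLARE_WAIT_QUEUE_HEAD(my_wq);  /* hypothetical wait queue head */
    static int my_condition;                /* hypothetical event flag */

    /* Sleeper: block until my_condition becomes non-zero or a signal
     * arrives.  Returns 0 on a real wakeup, -ERESTARTSYS on a signal. */
    static int wait_for_event(void)
    {
            return wait_event_interruptible(my_wq, my_condition != 0);
    }

    /* Waker: set the condition, then wake any interruptible sleepers. */
    static void signal_event(void)
    {
            my_condition = 1;
            wake_up_interruptible(&my_wq);
    }

Is that macro what drivers are actually expected to use, with the hand-rolled loop reserved for cases it cannot express?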