Dear Mr. Dmitry,

Thanks a lot for your suggestions.

>> + spin_lock_irq(&dev->event_lock);
>> + spin_lock(&client->buffer_lock);
>> + spin_unlock(&dev->event_lock);
>
> Umm, why?

Yes, there is no need for event_lock here, as we are modifying a
client-specific data structure only. buffer_lock alone guarantees
atomicity while flushing the client's pending event buffer, so I will
modify the locking to use buffer_lock only.

>> +
>> + /* Flush client's events after clk_type is changed
>> + * and queue SYN_DROPPED event.*/
>> + client->packet_head = client->head = client->tail;
>> + spin_unlock_irq(&client->buffer_lock);
>> +
>> + evdev_queue_syn_dropped(client);
>
> This is still racy. I'd rather we passed a flag to
> evdev_queue_syn_dropped() to indicate it should also clear queue.

Could you please tell me in which scenarios this patch is prone to race
conditions? As I see it, we are only modifying the client's buffer
indexes, so buffer_lock should be sufficient.

Yes, we can add one more parameter to evdev_queue_syn_dropped(), on the
basis of which we can flush the buffer. For example:

static void evdev_queue_syn_dropped(struct evdev_client *client)

can be changed to:

static void evdev_queue_syn_dropped(struct evdev_client *client, bool flush)
{
	spin_lock(&client->buffer_lock);

	if (flush)
		client->packet_head = client->head = client->tail;

	.........

	spin_unlock(&client->buffer_lock);
}

Alternatively, we can extend the __evdev_flush_queue() function to
support flushing the whole client event queue, as it currently flushes
events of a single type only. I think the second way is better.

Please give your insight on the above suggested changes.

Thanks,
Anshul Garg
--
To unsubscribe from this list: send the line "unsubscribe linux-input" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html