On Sat, 27 Nov 2010, Jan Engelhardt wrote:

> On Saturday 2010-11-27 10:06, Jozsef Kadlecsik wrote:
> >
> > >a suspended process could cause any problem, we faced it already.
>
> * let there be a list of 6 entries of data, ABCDEF
> * proc A starts a dump and reads, say, 1 of 10 entries (A)
> * proc B adds a new entry at the start of the list, Z,
>   and deletes an entry, A
>   (and does these two actions atomically)
> * proc A reads the rest
[...]
> When the kernel dumps however, and the skb is full, and it returns to
> userspace, no rcu and no mutex may be held, which gives away the
> guarantee of an atomic view. The chain might go away in between,
> unless I hold the writer lock. To hold the lock across the entire
> dump would require some semaphore, and it does not seem like a good
> idea to block users across returns to userspace either.

AFAIK when the kernel dumps and the skb is full, it is not returned
directly to userspace but first enqueued. And if the userspace listener
is too slow or not ready to receive the netlink messages from the queue,
then the queue can get full and messages will be lost.

So I think the steps are:

* let there be a list of 6 entries of data, ABCDEF
* proc A starts a dump, and the kernel enqueues the messages, which
  cover all entries. From the kernel's point of view the dumping is
  done. At the same time proc A is receiving the messages from the
  queue...
* proc B adds a new entry at the start of the list, Z,
  and deletes an entry, A
  (and does these two actions atomically)
* ...proc A reads the rest from the queue.

If messages are not lost due to slow userspace handling, then the
received state is correct and corresponds to the one at the time the
dump was initiated.

Best regards,
Jozsef
-
E-mail  : kadlec@xxxxxxxxxxxxxxxxx, kadlec@xxxxxxxxxxxx
PGP key : http://www.kfki.hu/~kadlec/pgp_public_key.txt
Address : KFKI Research Institute for Particle and Nuclear Physics
          H-1525 Budapest 114, POB. 49, Hungary
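
P.S.: for illustration, a rough sketch of the userspace side of such a
dump (not code from the tree; the function name, buffer size and error
handling are made up): the listener keeps calling recv() on the netlink
socket and walks the queued messages until NLMSG_DONE. If it is too slow
and the per-socket receive queue overflows, recv() fails with ENOBUFS
and part of the dump is simply gone - which is the message-loss case
described above.

#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <linux/netlink.h>

/* Drain one dump from an already connected netlink socket on which a
 * request with NLM_F_DUMP was sent.  Returns 0 when NLMSG_DONE is seen,
 * -1 on error or when the receive queue overflowed (ENOBUFS), i.e. when
 * part of the dump was dropped. */
static int drain_dump(int fd)
{
	char buf[8192];
	struct nlmsghdr *nlh;
	int len;

	for (;;) {
		len = recv(fd, buf, sizeof(buf), 0);
		if (len < 0) {
			if (errno == EINTR)
				continue;
			if (errno == ENOBUFS)
				/* the socket queue got full and the
				 * kernel dropped messages: the dump as
				 * received is incomplete */
				fprintf(stderr, "dump truncated\n");
			return -1;
		}

		for (nlh = (struct nlmsghdr *)buf; NLMSG_OK(nlh, len);
		     nlh = NLMSG_NEXT(nlh, len)) {
			if (nlh->nlmsg_type == NLMSG_DONE)
				return 0;	/* end of dump */
			if (nlh->nlmsg_type == NLMSG_ERROR)
				return -1;
			/* otherwise: process one dumped entry */
		}
	}
}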