Dear All,

I have seen an oops in the 2.4.18 kernel which I think may also be
present in the 2.4.21 kernel, and was wondering if anyone else had
seen it. I have traced the oops back to the following section of
code, highlighted with the text SC:

__inline__ void __tcp_put_port(struct sock *sk)
{
	struct tcp_bind_hashbucket *head = &tcp_bhash[tcp_bhashfn(sk->num)];
	struct tcp_bind_bucket *tb;

	spin_lock(&head->lock);
	tb = (struct tcp_bind_bucket *) sk->prev;
	if (sk->bind_next)
		sk->bind_next->bind_pprev = sk->bind_pprev;
	*(sk->bind_pprev) = sk->bind_next;
	sk->prev = NULL;
	sk->num = 0;
	if (tb->owners == NULL) {
		if (tb->next)
			tb->next->pprev = tb->pprev;
		*(tb->pprev) = tb->next;	// SC PANIC HERE
		kmem_cache_free(tcp_bucket_cachep, tb);
	}
	spin_unlock(&head->lock);
}

What guarantees that tb->pprev will never be NULL under these
circumstances?

Having looked at this in some more detail, I was wondering about the
following code in the same file; again, look for the text SC, which
highlights the concern.

static int tcp_v4_destroy_sock(struct sock *sk)
{
	struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);

	tcp_clear_xmit_timers(sk);

	/* Clean up the write buffer. */
	tcp_writequeue_purge(sk);

	/* Cleans up our, hopefully empty, out_of_order_queue. */
	__skb_queue_purge(&tp->out_of_order_queue);

	/* Clean prequeue, it must be empty really */
	__skb_queue_purge(&tp->ucopy.prequeue);

	/* Clean up a referenced TCP bind bucket. */
	// SC sk->prev is not being accessed within the local_bh_disable calls
	// SC does this mean we have a potential concurrent access issue?
	// SC should we repeat the test in tcp_put_port before calling
	// SC __tcp_put_port?
	if (sk->prev != NULL)
		tcp_put_port(sk);

	/* If sendmsg cached page exists, toss it. */
	if (tp->sndmsg_page != NULL)
		__free_page(tp->sndmsg_page);

	atomic_dec(&tcp_sockets_allocated);

	return 0;
}

Simon
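
P.S. To make the window I am worried about concrete, here is the
interleaving I have in mind. The softirq-context caller on the other
CPU is an assumption on my part; I have not proven that this is the
path that races with socket destruction.

	/*
	 * Suspected interleaving (a sketch, not a proven trace):
	 *
	 *   CPU 1: tcp_v4_destroy_sock      CPU 2: softirq path
	 *   --------------------------      -------------------
	 *   if (sk->prev != NULL)  <- true
	 *                                   __tcp_put_port(sk);
	 *                                     sk->prev = NULL;
	 *                                     kmem_cache_free(tcp_bucket_cachep, tb);
	 *   tcp_put_port(sk);
	 *     local_bh_disable();
	 *     __tcp_put_port(sk);
	 *       tb = (struct tcp_bind_bucket *) sk->prev;
	 *
	 * tb is now either NULL, oopsing at the first dereference
	 * (tb->owners), or - under some other ordering I have not pinned
	 * down - a stale pointer into a freed bucket, in which case
	 * tb->owners can happen to read as NULL and the unlink faults at
	 * *(tb->pprev) = tb->next, the SC PANIC HERE line above.
	 */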
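
P.P.S. And here is a minimal sketch of the change suggested by the SC
comments above - untested, and assuming tcp_put_port() in this tree is
the plain local_bh_disable()/local_bh_enable() wrapper around
__tcp_put_port(), which is what I see in my 2.4 sources:

	/* Sketch only: repeat the sk->prev test once bottom halves are
	 * disabled, so the check and the unlink cannot be separated by
	 * a local softirq.
	 */
	void tcp_put_port(struct sock *sk)
	{
		local_bh_disable();
		if (sk->prev != NULL)	/* re-test under local_bh_disable */
			__tcp_put_port(sk);
		local_bh_enable();
	}

Note this only keeps the local CPU's softirqs out; if another CPU can
release the same port, the test would presumably have to move inside
__tcp_put_port() under spin_lock(&head->lock) to be airtight.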