On Fri, Jun 11, 2010 at 09:06:01AM -0700, Linus Torvalds wrote:
> On Fri, Jun 11, 2010 at 7:50 AM, Nick Piggin <npiggin@xxxxxxx> wrote:
> > Not sure if this is really the _cleanest_ way to fix it. But open coding
> > the list walking is a bit annoying too. And I couldn't see any real way
> > to make the list macro safe. Better ideas?
>
> I really think we should open-code the list walking instead. You
> basically already are doing that, and in a very non-obvious way too
> (ie you are mixing the non-open-coded list walker with also explicitly
> playing with the internal variable for that magic walker).
>
> So I would get rid of the 'list_for_each_entry_safe' entirely, and
> replace it with something like
>
>     struct list_head *list;
>
>     spin_lock(&sb_lock);
>     list = super_blocks.next;
>     while (list != &super_blocks) {
>         struct super_block *sb = list_entry(list, struct super_block, s_list);
>         list = list->next;
>
>         if (list_empty(&sb->s_instances))
>             continue;
>
>         if (!sb->s_nr_dentry_unused)
>             continue;
>
>         sb->s_count++;
>         spin_unlock(&sb_lock);
>
>         .... whatever ...
>
>         spin_lock(&sb_lock);
>         /* We dropped the lock, need to re-load the next list entry */
>         list = sb->s_list.next;
>         __put_super(sb);
>     }

Yeah, I do agree really. I guess the bug came about in the first place
because it's easy to overlook where the memory accesses happen.

> which isn't that much more complicated, now is it? Sure, it's
> open-coded, but at least it doesn't play games. And being open-coded,
> it's a lot more honest about the issue. Maybe even add a comment
> saying "we can't use the list_for_each[_safe]() macro, because we
> don't hold the lock and we're not the only ones that may delete
> things" explaining _why_ it's open-coded.
>
> I dunno. Maybe Al disagrees. I just don't like using the "simple
> helpers" and then changing subtly how they work by knowing their
> internals.

I'll respin the patch and we'll see.