On Mon, Feb 03, 2020 at 03:09:37PM +0100, Jan Kara wrote:
> Hello Matthew!
>
> Lately I've been looking into speeding up page cache truncation, which got
> slowed down by the conversion of the page cache to the xarray, as we spoke
> about back in February / March [1].  Now I have a relatively simple patch
> giving me around a 6% improvement in truncation speed on my test machine,
> but while testing and debugging it, I've found out that the current xarray
> tagged iteration is racy:
>
> TASK1                                   TASK2
> page_cache_delete()                     find_get_pages_range_tag()
>                                           xas_for_each_marked()
>                                             xas_find_marked()
>                                               off = xas_find_chunk()
>   xas_store(&xas, NULL)
>     xas_init_marks(&xas);
>     ...
>     rcu_assign_pointer(*slot, NULL);
>                                               entry = xa_entry(off);
>
> So xas_for_each_marked() can return NULL entries as tagged, thus aborting
> the xas_for_each_marked() iteration prematurely (data loss possible).
>
> Now I have a patch to change xas_for_each_marked() to not get confused by
> NULL entries (because that is IMO a fragile design anyway and easy to
> avoid AFAICT), but that still leaves us with find_get_pages_range_tag()
> getting NULL as a tagged entry, which causes an oops there.
>
> I see two options for how to fix this and I'm not quite decided which one
> is better:
>
> 1) Just add NULL checking to find_get_pages_range_tag(), similarly to how
> it currently checks xa_is_value().  Quick grepping seems to show that that
> is the only place using tagged iteration under RCU.  It is cheap but kind
> of ugly.
>
> 2) Make sure xas_find_marked() and xas_next_marked() recheck the marks
> after loading the entry.  This is more convenient for the callers but
> potentially more expensive, since we'd have to add some barriers there.
>
> What's your opinion?  I'm leaning more towards 1) but I'm not completely
> decided...

Thanks for debugging that!  This must've been the problem I was hitting
when I originally tried to solve it.  I prefer a third choice ... continue
to iterate forward if we find a NULL entry that used to have a tag set on
it.  That should be cheap.
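A rough sketch of that third choice, at the xas_next_marked() level in
include/linux/xarray.h (v5.5-era helpers; this illustrates the idea, not
a final patch): if the slot located via the mark bitmap has been erased
in the meantime, fall back to xas_find_marked() and keep walking forward
rather than handing the NULL entry to the caller.  The extra branch only
costs anything when the race is actually lost, so the common case stays
cheap.

/* Sketch only: the v5.5 xas_next_marked() fast path, with a fallback
 * added for the case where the marked slot has been concurrently
 * erased. */
static inline void *xas_next_marked(struct xa_state *xas, unsigned long max,
				    xa_mark_t mark)
{
	struct xa_node *node = xas->xa_node;
	void *entry;
	unsigned int offset;

	/* Not positioned in a terminal node: do a full search. */
	if (unlikely(xas_not_node(node) || node->shift))
		return xas_find_marked(xas, max, mark);
	offset = xas_find_chunk(xas, true, mark);
	xas->xa_offset = offset;
	xas->xa_index = (xas->xa_index & ~XA_CHUNK_MASK) + offset;
	if (xas->xa_index > max)
		return NULL;
	if (offset == XA_CHUNK_SIZE)
		return xas_find_marked(xas, max, mark);
	entry = xa_entry(xas->xa, node, offset);
	/*
	 * The slot may have had NULL stored into it after we sampled
	 * the mark word.  Instead of returning a NULL "tagged" entry,
	 * continue the search for the next live marked entry.
	 */
	if (!entry)
		return xas_find_marked(xas, max, mark);
	return entry;
}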
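For comparison, a condensed sketch of what option 1) above would amount
to in the find_get_pages_range_tag() loop in mm/filemap.c: a NULL skip
next to the existing xa_is_value() one.  The added check and its exact
placement are hypothetical, and the loop body is abbreviated from the
v5.5-era function:

	rcu_read_lock();
	xas_for_each_marked(&xas, page, end, tag) {
		if (xas_retry(&xas, page))
			continue;
		/*
		 * Hypothetical check for option 1): a racing
		 * page_cache_delete() can store NULL in the slot after
		 * the mark bitmap was sampled, so lockless tagged
		 * iteration may produce a NULL "tagged" entry.  Skip it
		 * instead of oopsing on the dereference below.
		 */
		if (!page)
			continue;
		/* Existing check: value entries carry no struct page. */
		if (xa_is_value(page))
			continue;
		if (!page_cache_get_speculative(page))
			goto retry;
		/* Has the page moved or been split? */
		if (unlikely(page != xas_reload(&xas)))
			goto put_page;
		pages[ret] = page;
		if (++ret == nr_pages)
			break;
		continue;
put_page:
		put_page(page);
retry:
		xas_reset(&xas);
	}
	rcu_read_unlock();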