Hello Matthew!

Lately I've been looking into speeding up page cache truncation, which got
slowed down by the conversion of the page cache to xarray, as we discussed
back in February / March [1]. I now have a relatively simple patch giving me
around a 6% improvement in truncation speed on my test machine, but while
testing and debugging it I've found that the current xarray tagged iteration
is racy:

TASK1                                   TASK2
page_cache_delete()                     find_get_pages_range_tag()
                                          xas_for_each_marked()
                                            xas_find_marked()
                                              off = xas_find_chunk()
  xas_store(&xas, NULL)
    xas_init_marks(&xas);
    ...
    rcu_assign_pointer(*slot, NULL);
                                              entry = xa_entry(off);

So xas_for_each_marked() can return NULL entries as tagged, which aborts the
xas_for_each_marked() iteration prematurely (data loss is possible). I now
have a patch changing xas_for_each_marked() so that it does not get confused
by NULL entries (because that is IMO a fragile design anyway and easy to
avoid AFAICT), but that still leaves us with find_get_pages_range_tag()
getting a NULL tagged entry, which causes an oops there.

I see two options for fixing this and I'm not quite decided which is better:

1) Just add a NULL check to find_get_pages_range_tag(), similarly to how it
currently checks xa_is_value(). Quick grepping suggests that this is the
only place that uses tagged iteration under RCU. It is cheap but kind of
ugly. A sketch of what I mean is below the sign-off.

2) Make sure xas_find_marked() and xas_next_marked() re-check the marks
after loading the entry. This is more convenient for the callers but
potentially more expensive, since we'd have to add some barriers there.
Again, a rough sketch follows below.

What's your opinion? I'm leaning more towards 1) but I'm not completely
decided...

                                                                Honza

[1] https://lore.kernel.org/linux-mm/20190226165628.GB24711@xxxxxxxxxxxxxx

--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
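
P.S. To make 1) concrete, here is roughly the check I have in mind, as an
excerpt of the lookup loop in find_get_pages_range_tag() in mm/filemap.c.
This is an untested, hand-written sketch with the surrounding code elided,
not a real patch:

	rcu_read_lock();
	xas_for_each_marked(&xas, page, end, tag) {
		if (xas_retry(&xas, page))
			continue;
		/*
		 * A concurrent page_cache_delete() may have stored NULL
		 * into the slot after we saw the mark set but before we
		 * loaded the entry. Skip such entries instead of oopsing.
		 * This mirrors the existing xa_is_value() check below.
		 */
		if (!page)
			continue;
		/* Shadow / value entries are not pages, skip them. */
		if (xa_is_value(page))
			continue;
		...
	}
	rcu_read_unlock();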
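
P.P.S. And for comparison, 2) would look something like the following near
the end of xas_find_marked(), with xas_next_marked() needing the same
treatment. Again this is only a sketch from memory; the exact barrier
pairing with xas_init_marks() / rcu_assign_pointer() on the delete side
would need more thought:

	entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
	/*
	 * The entry may have been deleted (and its mark cleared) between
	 * the mark scan and the entry load above. Order the mark re-check
	 * after the entry load and restart the scan if the mark is gone.
	 */
	smp_rmb();
	if (!xas_get_mark(xas, mark))
		goto retry;	/* back to the top of the scan (label elided) */
	return entry;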