4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Oleg Drokin <green@xxxxxxxxxxxxxx>

commit cd15dd6ef4ea11df87f717b8b1b83aaa738ec8af upstream.

I have been seeing a lot of unexplained crashes in osc_lru_shrink lately,
and then I found this patch that slipped under the radar somehow: it
incorrectly converted the while loop used for lru list iteration into
list_for_each_entry_safe, ignoring that the body of the loop drops the
spinlock guarding this list and moves list entries around.

Not sure why it was not showing up right away; perhaps some of the more
recently committed LRU changes put extra pressure on this code and
finally highlighted the breakage.

Reverts: 8adddc36b1fc ("staging: lustre: osc: Use list_for_each_entry_safe")
CC: Bhaktipriya Shridhar <bhaktipriya96@xxxxxxxxx>
Signed-off-by: Oleg Drokin <green@xxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 drivers/staging/lustre/lustre/osc/osc_page.c |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/drivers/staging/lustre/lustre/osc/osc_page.c
+++ b/drivers/staging/lustre/lustre/osc/osc_page.c
@@ -542,7 +542,6 @@ long osc_lru_shrink(const struct lu_env
 	struct cl_object *clobj = NULL;
 	struct cl_page **pvec;
 	struct osc_page *opg;
-	struct osc_page *temp;
 	int maxscan = 0;
 	long count = 0;
 	int index = 0;
@@ -569,13 +568,15 @@ long osc_lru_shrink(const struct lu_env
 
 	spin_lock(&cli->cl_lru_list_lock);
 	maxscan = min(target << 1, atomic_long_read(&cli->cl_lru_in_list));
-	list_for_each_entry_safe(opg, temp, &cli->cl_lru_list, ops_lru) {
+	while (!list_empty(&cli->cl_lru_list)) {
 		struct cl_page *page;
 		bool will_free = false;
 
 		if (--maxscan < 0)
 			break;
 
+		opg = list_entry(cli->cl_lru_list.next, struct osc_page,
+				 ops_lru);
 		page = opg->ops_cl.cpl_page;
 		if (lru_page_busy(cli, page)) {
 			list_move_tail(&opg->ops_lru, &cli->cl_lru_list);

--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
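
For background on why the revert is needed: list_for_each_entry_safe() caches
a pointer to the next element before the loop body runs, so it is only "safe"
against the current thread deleting the current entry.  Once the lock guarding
the list is dropped inside the body, other threads can move or free both the
current entry and the cached next one, and the following iteration walks a
stale pointer.  Restarting from the list head after re-taking the lock avoids
carrying any list pointer across the unlock.  A minimal sketch of the two
patterns, using hypothetical names (lru_lock, lru_list, process_page) rather
than the actual Lustre code:

	/*
	 * BROKEN: the "safe" variant caches the next entry in 'tmp' before
	 * the body runs.  After spin_unlock(), another thread may move or
	 * free both 'opg' and the cached 'tmp'; the next iteration then
	 * dereferences a stale pointer.
	 */
	list_for_each_entry_safe(opg, tmp, &lru_list, ops_lru) {
		list_del_init(&opg->ops_lru);
		spin_unlock(&lru_lock);
		process_page(opg);	/* may sleep; list can change here */
		spin_lock(&lru_lock);
	}

	/*
	 * OK for this pattern: re-read the head each time the lock is
	 * re-acquired, so no list pointer is carried across the unlock.
	 */
	while (!list_empty(&lru_list)) {
		opg = list_entry(lru_list.next, struct osc_page, ops_lru);
		list_del_init(&opg->ops_lru);
		spin_unlock(&lru_lock);
		process_page(opg);
		spin_lock(&lru_lock);
	}

The real osc_lru_shrink() additionally bounds the walk with maxscan and moves
busy pages to the tail of the list instead of unlinking them, but the
restart-from-head structure restored by this revert is what makes dropping the
lock inside the loop safe.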