> > Hi Rafael,
> >
> > Do you mean this is an unrelated nVidia bug?
>
> The nvidia driver _is_ buggy, but Maxim said he couldn't reproduce the
> problem if all the allocations made by the nvidia driver during suspend
> were changed to GFP_ATOMIC.
>
> > I probably haven't caught your point. I can't find Maxim's original bug
> > report. Can you share the test case and the details of your analysis?
>
> Maxim's original report is here:
> https://lists.linux-foundation.org/pipermail/linux-pm/2010-January/023982.html
>
> and the message I'm referring to is at:
> https://lists.linux-foundation.org/pipermail/linux-pm/2010-January/023990.html

Hmmm... Usually, increased I/O isn't caused by an MM change; one subsystem
changes its memory alloc/free pattern and another subsystem feels the
effect ;)  I don't think this message indicates an MM fault.

Also, the MM changes in 2.6.33 are small. If the fault really is in an MM
change (note: my guess is that it isn't), the most likely suspect is my
"killing shrink_all_zones" patch. The old shrink_all_zones reclaimed much
more memory than required, and the patch fixed that. IOW, the patch can
reduce the free memory available to a buggy .suspend method of the driver.
But I don't think it is an MM fault. As I said, drivers can't allocate
memory freely on demand in their suspend methods. That's obvious. They
should stop making such an unrealistic assumption.

But how should we fix this?
 - Guarantee that I/O devices are suspended last?
 - Free up a lot of memory before calling .suspend methods, even though
   typical drivers don't need it?
 - Ask all drivers how much memory they require before suspend starts,
   and make enough memory free up front?
 - Or is there an alternative way?

We probably have multiple options, but I don't think GFP_NOIO is a good
one. It assumes the system has plenty of non-dirty cache memory, and that
isn't guaranteed.
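
To make the point concrete, here is a minimal sketch of the kind of
allocation in a .suspend method we are discussing. The driver name,
callback and buffer size are made up for illustration and are not taken
from Maxim's report; only the GFP flags and their semantics are real:

/*
 * Hypothetical .suspend callback that needs a temporary buffer to
 * save device state before the device is powered down.
 */
#include <linux/slab.h>
#include <linux/device.h>

#define FOO_STATE_SIZE	(128 * 1024)	/* made-up size */

static int foo_suspend(struct device *dev)
{
	void *state;

	/*
	 * GFP_KERNEL may start I/O during reclaim, and that I/O can hit
	 * devices that are already suspended.  GFP_NOIO avoids starting
	 * I/O, but it still relies on enough clean, easily reclaimable
	 * cache being around.  GFP_ATOMIC never sleeps or reclaims at
	 * all, so it only succeeds if the free pages already exist.
	 */
	state = kmalloc(FOO_STATE_SIZE, GFP_NOIO);
	if (!state)
		return -ENOMEM;

	/* ... save device registers into 'state' ... */

	kfree(state);
	return 0;
}

The sketch only shows why GFP_NOIO isn't a complete answer: it merely
avoids generating I/O during reclaim, so if the clean cache isn't there
the allocation still fails, and GFP_ATOMIC in turn only works if the
needed free memory was set aside beforehand.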