On Monday 18 January 2010, Benjamin Herrenschmidt wrote:
> On Mon, 2010-01-18 at 00:00 +0100, Rafael J. Wysocki wrote:
> > On Sunday 17 January 2010, Benjamin Herrenschmidt wrote:
> > > On Sun, 2010-01-17 at 14:27 +0100, Rafael J. Wysocki wrote:
> > ...
> > > However, it's hard to deal with the case of allocations that have
> > > already started waiting for IOs. It might be possible to have some VM
> > > hook to make them wake up, re-evaluate the situation and get out of that
> > > code path, but in any case it would be tricky.
> >
> > In the second version of the patch I used an rwsem that made us wait for these
> > allocations to complete before we changed gfp_allowed_mask.
> >
> > [This is kinda buggy in the version I sent, but I'm going to send an update
> > in a minute.]

> And nobody screamed due to cache line ping pong caused by this in the
> fast path? :-)

Apparently not. :-)

> We might want to look at something a bit smarter for that sort of
> read-mostly-really-really-mostly construct, though in this case I don't
> think RCU is the answer since we are happily scheduling.
>
> I wonder if something per-cpu would do; it's thus the responsibility of
> the "writer" to take them all, in order, for all CPUs.

I think I'll get back to the first version of the patch, which I think is
not going to have side effects (as long as no one changes gfp_allowed_mask
in parallel with suspend/resume), for now. We can add more complicated
things on top of it later.

Rafael
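[Editor's note: for readers unfamiliar with the per-cpu construct Ben suggests
above, here is a minimal userspace sketch of the locking shape: each reader
takes only its own slot's lock, so the allocation fast path never bounces a
shared cache line between CPUs, while the rare writer (suspend/resume) acquires
every slot in a fixed order before changing the mask. The slot count, the use of
sched_getcpu() and pthread mutexes, and all names below are illustrative
assumptions, not the actual linux-pm patch or kernel primitives.]

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NR_SLOTS 8
static pthread_mutex_t slot_lock[NR_SLOTS];
static unsigned int gfp_allowed_mask = ~0u;     /* toy stand-in for the real mask */

static int my_slot(void)
{
        int cpu = sched_getcpu();               /* pick the slot for the current CPU */
        return (cpu < 0 ? 0 : cpu) % NR_SLOTS;
}

/* Fast path: what an allocator would do around reading the mask. */
static unsigned int read_mask(void)
{
        int s = my_slot();
        unsigned int mask;

        pthread_mutex_lock(&slot_lock[s]);      /* touches only this CPU's lock */
        mask = gfp_allowed_mask;
        pthread_mutex_unlock(&slot_lock[s]);
        return mask;
}

/* Slow path: suspend/resume takes every slot before touching the mask. */
static void write_mask(unsigned int new_mask)
{
        int i;

        for (i = 0; i < NR_SLOTS; i++)          /* fixed order avoids writer/writer deadlock */
                pthread_mutex_lock(&slot_lock[i]);
        gfp_allowed_mask = new_mask;
        for (i = NR_SLOTS - 1; i >= 0; i--)
                pthread_mutex_unlock(&slot_lock[i]);
}

int main(void)
{
        int i;

        for (i = 0; i < NR_SLOTS; i++)
                pthread_mutex_init(&slot_lock[i], NULL);

        printf("mask before suspend: %#x\n", read_mask());
        write_mask(0x00ffffffu);                /* toy value; think "clear __GFP_IO/__GFP_FS" */
        printf("mask after suspend:  %#x\n", read_mask());
        return 0;
}

The point of the shape is that each reader pays one uncontended lock/unlock on
a line no other CPU writes, which is about what a
"read-mostly-really-really-mostly" path wants, while the writer's cost of
walking all slots is irrelevant because suspend/resume is rare.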