Hi Sultan,

On Mon, Mar 11, 2019 at 10:58 AM Sultan Alsawaf <sultan@xxxxxxxxxxxxxxx> wrote:
>
> On Mon, Mar 11, 2019 at 06:43:20PM +0100, Michal Hocko wrote:
> > I am sorry, but we are not going to maintain two different OOM
> > implementations in the kernel. From a quick look the implementation is
> > quite a hack which is not really suitable for anything but a very
> > specific usecase. E.g. reusing a freed page for a waiting allocation
> > sounds like an interesting idea, but it doesn't really work for many
> > reasons. E.g. any NUMA affinity is broken, and zone protection doesn't
> > work either. Not to mention how the code hooks into the allocator hot
> > paths. This is simply a no-no.
> >
> > Last but not least, people have worked really hard to provide means
> > (PSI) to do what you need in userspace.
>
> Hi Michal,
>
> Thanks for the feedback. I had no doubt that this would be vehemently
> rejected on the mailing list, but I wanted feedback/opinions on it and
> thus sent it as an RFC.

Thanks for the proposal. I think Michal and Joel have already answered why
an in-kernel LMK will not be accepted; that was one of the reasons the
lowmemorykiller driver was removed in 4.12.

> At best I thought perhaps the mechanisms I've employed might serve as
> inspiration for LMKD improvements in Android, since this hacky OOM killer
> I've devised does work quite well for the very specific usecase it is set
> out to address. The NUMA affinity and zone protection bits are helpful
> insights too.

The idea seems interesting, although I need to think about it a bit more.
Killing processes based on a failed page allocation might backfire during
transient spikes in memory usage. AFAICT the biggest issue with using this
approach from userspace is that it is not practically implementable without
heavy in-kernel support. How to implement such an interaction between the
kernel and userspace would be an interesting discussion, which I would be
happy to participate in.

> I'll take a look at PSI which Joel mentioned as well.
>
> Thanks,
> Sultan Alsawaf

Thanks,
Suren.
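
P.S. For anyone who wants to experiment with the PSI route mentioned above,
here is a minimal sketch of a userspace low-memory monitor. It assumes the
PSI trigger interface proposed for /proc/pressure/memory (write a
"some <stall_us> <window_us>" trigger string, then poll() the fd for
POLLPRI); the 150ms-stall-per-1s-window threshold below is purely
illustrative, not a recommended tuning.

#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Wake up when tasks stall on memory >150ms within any 1s window. */
	const char *trig = "some 150000 1000000";
	struct pollfd fds;
	int n;

	fds.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
	if (fds.fd < 0) {
		perror("open /proc/pressure/memory");
		return 1;
	}
	if (write(fds.fd, trig, strlen(trig) + 1) < 0) {
		perror("write trigger");
		return 1;
	}
	fds.events = POLLPRI;

	for (;;) {
		n = poll(&fds, 1, -1);
		if (n < 0) {
			if (errno == EINTR)
				continue;
			perror("poll");
			return 1;
		}
		if (fds.revents & POLLERR) {
			/* The monitor fd was invalidated by the kernel. */
			fprintf(stderr, "PSI trigger error\n");
			return 1;
		}
		if (fds.revents & POLLPRI) {
			/*
			 * Pressure crossed the threshold: this is where an
			 * LMKD-like daemon would pick a victim (e.g. by
			 * oom_score_adj ordering) and kill it.
			 */
			printf("memory pressure event\n");
		}
	}
}

The point of this split is that the victim-selection policy stays entirely
in userspace; the kernel only reports stall time. That is exactly the
division of labor that an in-kernel LMK collapses.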