> -----Original Message-----
> From: Michal Hocko [mailto:mhocko@xxxxxxx]
> Sent: Tuesday, July 23, 2013 11:10 AM
> To: KY Srinivasan
> Cc: gregkh@xxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> devel@xxxxxxxxxxxxxxxxxxxxxx; olaf@xxxxxxxxx; apw@xxxxxxxxxxxxx;
> andi@xxxxxxxxxxxxxx; akpm@xxxxxxxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx;
> kamezawa.hiroyuki@xxxxxxxxx; hannes@xxxxxxxxxxx; yinghan@xxxxxxxxxx;
> jasowang@xxxxxxxxxx; kay@xxxxxxxx
> Subject: Re: [PATCH 1/1] Drivers: base: memory: Export symbols for onlining
> memory blocks
>
> On Tue 23-07-13 14:52:36, KY Srinivasan wrote:
> >
> > > -----Original Message-----
> > > From: Michal Hocko [mailto:mhocko@xxxxxxx]
> > > Sent: Monday, July 22, 2013 8:37 AM
> > > To: KY Srinivasan
> > > Cc: gregkh@xxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> > > devel@xxxxxxxxxxxxxxxxxxxxxx; olaf@xxxxxxxxx; apw@xxxxxxxxxxxxx;
> > > andi@xxxxxxxxxxxxxx; akpm@xxxxxxxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx;
> > > kamezawa.hiroyuki@xxxxxxxxx; hannes@xxxxxxxxxxx; yinghan@xxxxxxxxxx;
> > > jasowang@xxxxxxxxxx; kay@xxxxxxxx
> > > Subject: Re: [PATCH 1/1] Drivers: base: memory: Export symbols for onlining
> > > memory blocks
> > >
> > > On Fri 19-07-13 12:23:05, K. Y. Srinivasan wrote:
> > > > The current machinery for hot-adding memory requires having udev
> > > > rules to bring the memory segments online. Export the necessary
> > > > functionality to bring the memory segments online without involving
> > > > user-space code.
> > >
> > > Why? Who is going to use it and for what purpose?
> > > If you need to do it from the kernel, can't you use a usermode helper
> > > thread?
> > >
> > > Besides that, this is far from being complete. memory_block_change_state
> > > seems to depend on device_hotplug_lock, and find_memory_block is
> > > currently called with mem_sysfs_mutex held. Neither of them is exported,
> > > AFAICS.
> >
> > You are right; not all of the required symbols are exported (yet). Let
> > me answer your other questions first:
> >
> > The Hyper-V balloon driver can use this functionality. I have
> > prototyped in-kernel "onlining" of hot-added memory that does not
> > require any help from user-level code, and it performs significantly
> > better than having user-level code involved in the hot-add process.
>
> What does significantly better mean here?

Fewer failures than before.

> > With this change, I am able to successfully hot-add and online the
> > hot-added memory even under extreme memory pressure, which is what you
> > would want, given that we are hot-adding memory to alleviate memory
> > pressure. The current scheme of involving user-level code to close
> > this loop obviously does not perform well under high memory pressure.
>
> Hmm, this is really unexpected. Why does high memory pressure matter
> here? Userspace only needs to access a sysfs file and echo a simple
> string into it. The rest is the same regardless of whether you do it
> from userspace.

Could it be that we fail to launch the user-space thread? The host presents a
large chunk of memory for "hot adding". Within the guest, I break this up and
hot-add it in 128MB chunks; as I loop through this process, I wait for the
onlining to occur before proceeding with the next hot-add operation. With
user-space code involved in the onlining process, I would frequently time out
waiting for the onlining to complete (under high memory load). After I
switched over to not involving user-space code, this problem does not exist,
since the onlining is done "in context".
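
To make the flow concrete, here is a rough sketch of the sort of loop I
described above. This is illustrative only, not the actual driver code: the
names (hot_add_region, ol_wait, HA_CHUNK_MB) are placeholders, the completion
is assumed to be signaled from a MEM_ONLINE memory notifier callback that is
not shown, and the add_memory(nid, start, size) interface is assumed.

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/memory_hotplug.h>
#include <linux/mm.h>
#include <linux/pfn.h>

/* Hot-add granularity used inside the guest (illustrative). */
#define HA_CHUNK_MB	128

/* Signaled from a MEM_ONLINE memory notifier callback (not shown). */
static DECLARE_COMPLETION(ol_wait);

static int hot_add_region(int nid, unsigned long start_pfn,
			  unsigned long pfn_count)
{
	unsigned long chunk_pfns = (HA_CHUNK_MB << 20) >> PAGE_SHIFT;
	unsigned long pfn = start_pfn;
	int ret;

	while (pfn < start_pfn + pfn_count) {
		reinit_completion(&ol_wait);

		/* Hot-add one 128MB chunk. */
		ret = add_memory(nid, PFN_PHYS(pfn),
				 (u64)chunk_pfns << PAGE_SHIFT);
		if (ret)
			return ret;

		/*
		 * Wait for this chunk to come online before adding the
		 * next one.  With udev doing the onlining, this wait
		 * frequently timed out under memory pressure; with
		 * in-kernel onlining it completes "in context".
		 */
		if (!wait_for_completion_timeout(&ol_wait, 5 * HZ))
			return -ETIMEDOUT;

		pfn += chunk_pfns;
	}

	return 0;
}

The key difference from the udev-based scheme is that nothing in this path
depends on running a user-space helper while the guest is already under
memory pressure.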
> > I can, if you prefer, export all of the necessary functionality in one
> > patch.
>
> If this really turns out to be a valid use case, then I would prefer
> exporting a high-level function which would hide all the locking and
> direct manipulation of the memory blocks.

I will take a crack at defining wrappers to hide some of the details. I will
also post the Hyper-V balloon driver patch that uses this functionality.

Regards,

K. Y
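
P.S. To make the suggestion concrete, the kind of wrapper I have in mind would
look roughly like the declaration below. Nothing with this name or signature
exists today; it is purely illustrative of a single exported helper that would
hide find_memory_block(), memory_block_change_state() and the associated
locking (device_hotplug_lock, mem_sysfs_mutex) from callers such as the
Hyper-V balloon driver:

#include <linux/types.h>

/*
 * Hypothetical interface, for discussion only.  Online every memory
 * block covering [start_pfn, start_pfn + nr_pages), performing the same
 * transition that writing "online" to
 * /sys/devices/system/memory/memoryN/state does today, with all of the
 * required locking handled internally.
 */
int online_memory_block_range(unsigned long start_pfn,
			      unsigned long nr_pages);

The balloon driver would call something like this right after a successful
add_memory() for each 128MB chunk, instead of waiting for udev to do the
onlining.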