On Tue, May 28, 2019 at 4:56 AM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
>
> On Tue 28-05-19 04:42:47, Daniel Colascione wrote:
> > On Tue, May 28, 2019 at 4:28 AM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> > >
> > > On Tue 28-05-19 20:12:08, Minchan Kim wrote:
> > > > On Tue, May 28, 2019 at 12:41:17PM +0200, Michal Hocko wrote:
> > > > > On Tue 28-05-19 19:32:56, Minchan Kim wrote:
> > > > > > On Tue, May 28, 2019 at 11:08:21AM +0200, Michal Hocko wrote:
> > > > > > > On Tue 28-05-19 17:49:27, Minchan Kim wrote:
> > > > > > > > On Tue, May 28, 2019 at 01:31:13AM -0700, Daniel Colascione wrote:
> > > > > > > > > On Tue, May 28, 2019 at 1:14 AM Minchan Kim <minchan@xxxxxxxxxx> wrote:
> > > > > > > > > > > if we went with the per vma fd approach then you would get this
> > > > > > > > > > > feature automatically because map_files would refer to file backed
> > > > > > > > > > > mappings while map_anon could refer only to anonymous mappings.
> > > > > > > > > >
> > > > > > > > > > The reason to add such a filter option is to avoid the parsing
> > > > > > > > > > overhead, so map_anon wouldn't be helpful.
> > > > > > > > >
> > > > > > > > > Without chiming in on whether the filter option is a good idea, I'd
> > > > > > > > > like to suggest providing an efficient binary interface for pulling
> > > > > > > > > memory map information out of processes. Some single-system-call
> > > > > > > > > method for retrieving a binary snapshot of a process's address space
> > > > > > > > > complete with attributes (selectable, like statx?) for each VMA would
> > > > > > > > > reduce complexity and increase performance in a variety of areas,
> > > > > > > > > e.g., Android memory map debugging commands.
> > > > > > > >
> > > > > > > > I agree it's the best we can get *generally*.
> > > > > > > > Michal, any opinion?
> > > > > > >
> > > > > > > I am not really sure this is directly related. I think the primary
> > > > > > > question that we have to sort out first is whether we want to have
> > > > > > > the remote madvise call process or vma fd based. This is an important
> > > > > > > distinction wrt. usability. I have only seen pid vs. pidfd discussions
> > > > > > > so far, unfortunately.
> > > > > >
> > > > > > With the current usecase, it's a per-process API with distinguishable
> > > > > > anon/file, but I thought it could be easily extended later to
> > > > > > per-address-range operations as userspace gets smarter with more
> > > > > > information.
> > > > >
> > > > > Never design a user API based on a single usecase, please. The "easily
> > > > > extended" part is by far not clear to me TBH. As I've already mentioned
> > > > > several times, the synchronization model has to be thought through
> > > > > carefully before a remote process address range operation can be
> > > > > implemented.
> > > >
> > > > I agree with you that we shouldn't design an API on a single usecase,
> > > > but what you are concerned about is actually not our usecase, because
> > > > we are resilient to the race since MADV_COLD|PAGEOUT is not destructive.
> > > > Actually, many hints are already racy in that the upcoming access
> > > > pattern may differ from the behavior you assumed at the moment of the
> > > > call.
> > >
> > > How come they are racy wrt address ranges? You would have to be in a
> > > multithreaded environment, and then the onus of synchronization is on
> > > the threads. That model is quite clear. But we are talking about
> > > separate processes, and some of them might not even be aware of an
> > > external entity tweaking their address space.
> >
> > I don't think the difference between a thread and a process matters in
> > this context. Threads race on address space operations all the time
> > --- in the sense that multiple threads modify a process's address
> > space without synchronization.
>
> I would disagree. They do have in-kernel synchronization as long as they
> do not use MAP_FIXED. If they do want to use MAP_FIXED then they better
> synchronize or the result is undefined.

Right. It's because the kernel hands off different regions to different
non-MAP_FIXED mmap callers that it's pretty easy for threads to mind
their own business, but they're all still using the same address space.

> > From a synchronization point
> > of view, it doesn't really matter whether it's a thread within the
> > target process or a thread outside the target process that does the
> > address space manipulation. What's new is the inspection of the
> > address space before performing an operation.
>
> The fundamental difference is that if you want to achieve the same
> inside the process then your application is inherently aware of the
> operation and uses whatever synchronization is needed to achieve
> consistency. As soon as you allow the same from outside, you either
> have to have an aware target application as well or you need a
> mechanism to find out that your decision has been invalidated by a
> later unsynchronized action.

I thought of this objection immediately after I hit send. :-) I still
don't think the intra- vs inter-process difference matters. It's true
that threads can synchronize with each other, but different processes
can synchronize with each other too. I mean, you *could* use
sem_open(3) for your heap lock and open the semaphore from two
different processes. That's silly, but it'd work. The important
requirement, I think, is that we need to support managing
"memory-naive" uncooperative tasks (perhaps legacy ones written before
cross-process memory management even became possible), and I think
that the cooperative-vs-uncooperative distinction matters a lot more
than the tgid of the thread doing the memory manipulation. (Although
in our case, we really do need a separate tgid. :-))
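
For what it's worth, here's roughly what I mean by the sem_open(3)
point: a minimal sketch of two unrelated processes sharing a "heap
lock" through a named semaphore. The semaphore name is made up, and
I'm not proposing anyone actually do this; it's just an illustration
that cross-process synchronization primitives already exist.

    /* Illustrative only. Build with -pthread (older glibc: -lrt). */
    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>

    int main(void)
    {
            /* Both processes open (or create) the same named semaphore;
             * the name "/heap-lock" is invented for this sketch. */
            sem_t *lock = sem_open("/heap-lock", O_CREAT, 0600, 1);
            if (lock == SEM_FAILED) {
                    perror("sem_open");
                    return 1;
            }

            sem_wait(lock);  /* take the cross-process "heap lock" */
            /* ... manipulate the shared heap / address range ... */
            sem_post(lock);  /* release it */

            sem_close(lock);
            return 0;
    }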
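
And going back to the binary-snapshot idea quoted above, purely to
make the shape concrete: nothing like this exists today, and every
name, flag, and field below is invented, but I'm imagining something
statx-like, where the caller passes a request mask and gets back
fixed-size per-VMA records in one call.

    #include <stdint.h>

    /* Hypothetical attribute mask, in the spirit of statx(): the caller
     * asks only for the fields it needs, so the kernel can skip the
     * expensive ones. All names here are invented for illustration. */
    #define VMA_ATTR_PROT   (1u << 0)   /* protection bits */
    #define VMA_ATTR_FILE   (1u << 1)   /* backing file identity, if any */
    #define VMA_ATTR_RSS    (1u << 2)   /* resident size */

    struct vma_record {
            uint64_t start;       /* VMA start address */
            uint64_t end;         /* VMA end address */
            uint32_t prot;        /* PROT_* flags, if requested */
            uint32_t flags;       /* anonymous vs. file-backed, etc. */
            uint64_t rss_bytes;   /* resident size, if requested */
    };

    /* Imagined single call: fill 'buf' with up to 'count' records
     * describing the address space behind 'pidfd', returning the number
     * of VMAs written (or the total needed, so callers can resize). */
    long process_vma_snapshot(int pidfd, uint32_t attr_mask,
                              struct vma_record *buf, unsigned long count);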