On 2021/8/16 16:34, Greg Kroah-Hartman wrote:
> On Mon, Aug 16, 2021 at 07:21:37AM +0000, Chen Huang wrote:
>> From: Roman Gushchin <guro@xxxxxx>
>
> What is the git commit id of this patch in Linus's tree?
>
>>
>> Patch series "mm: allow mapping accounted kernel pages to userspace", v6.
>>
>> Currently a non-slab kernel page which has been charged to a memory cgroup
>> can't be mapped to userspace. The underlying reason is simple: PageKmemcg
>> flag is defined as a page type (like buddy, offline, etc), so it takes a
>> bit from a page->mapped counter. Pages with a type set can't be mapped to
>> userspace.
>>
>> But in general the kmemcg flag has nothing to do with mapping to
>> userspace. It only means that the page has been accounted by the page
>> allocator, so it has to be properly uncharged on release.
>>
>> Some bpf maps are mapping the vmalloc-based memory to userspace, and their
>> memory can't be accounted because of this implementation detail.
>>
>> This patchset removes this limitation by moving the PageKmemcg flag into
>> one of the free bits of the page->mem_cgroup pointer. Also it formalizes
>> accesses to the page->mem_cgroup and page->obj_cgroups using new helpers,
>> adds several checks and removes a couple of obsolete functions. As a
>> result the code became more robust with fewer open-coded bit tricks.
>>
>> This patch (of 4):
>>
>> Currently there are many open-coded reads of the page->mem_cgroup pointer,
>> as well as a couple of read helpers, which are barely used.
>>
>> It creates an obstacle to reusing some bits of the pointer for storing
>> additional information. In fact, we already do this for slab pages,
>> where the last bit indicates that a pointer has an attached vector of
>> objcg pointers instead of a regular memcg pointer.
>>
>> This commit uses two existing helpers and introduces a new helper to
>> convert all read sides to calls of these helpers:
>>   struct mem_cgroup *page_memcg(struct page *page);
>>   struct mem_cgroup *page_memcg_rcu(struct page *page);
>>   struct mem_cgroup *page_memcg_check(struct page *page);
>>
>> page_memcg_check() is intended to be used in cases when the page can be a
>> slab page and have a memcg pointer pointing at objcg vector. It does
>> check the lowest bit, and if set, returns NULL. page_memcg() contains a
>> VM_BUG_ON_PAGE() check for the page not being a slab page.
>>
>> To make sure nobody uses a direct access, struct page's
>> mem_cgroup/obj_cgroups is converted to unsigned long memcg_data.
>>
>> Signed-off-by: Roman Gushchin <guro@xxxxxx>
>> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
>> Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
>> Reviewed-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
>> Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
>> Acked-by: Michal Hocko <mhocko@xxxxxxxx>
>> Link: https://lkml.kernel.org/r/20201027001657.3398190-1-guro@xxxxxx
>> Link: https://lkml.kernel.org/r/20201027001657.3398190-2-guro@xxxxxx
>> Link: https://lore.kernel.org/bpf/20201201215900.3569844-2-guro@xxxxxx
>>
>> Conflicts:
>>	mm/memcontrol.c
>
> The "Conflicts:" lines should be removed.
>
> Please fix up the patch series and resubmit. But note, this seems
> really intrusive, are you sure these are all needed?
>

OK, I will resend the patchset.

Roman Gushchin's patchset formalizes accesses to page->mem_cgroup and
page->obj_cgroups. But LRU pages and most other raw memcg users may
still pin a memory cgroup pointer where it should always point to an
object cgroup pointer. That's the problem I met, and Muchun Song's
patchset fixes it, so I think these patches are all needed.
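
To make the pointer scheme described above concrete, here is a rough
sketch of how page_memcg_check() treats the lowest bit of
page->memcg_data. It is only my simplified reading of the quoted
description, not code copied from the kernel tree, and the
MEMCG_DATA_OBJCGS flag name is an assumption:

	#include <linux/mm_types.h>	/* struct page, with the new memcg_data field */

	/* Assumed flag name: lowest bit of memcg_data marks an objcg vector. */
	#define MEMCG_DATA_OBJCGS	(1UL << 0)

	static struct mem_cgroup *sketch_page_memcg_check(struct page *page)
	{
		unsigned long memcg_data = READ_ONCE(page->memcg_data);

		/*
		 * For slab pages the lowest bit is set and the value is a
		 * vector of objcg pointers, not a memcg pointer, so report
		 * "no memcg" instead of returning a mistyped pointer.
		 */
		if (memcg_data & MEMCG_DATA_OBJCGS)
			return NULL;

		return (struct mem_cgroup *)memcg_data;
	}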

> What UIO driver are you using that is showing problems like this?
>

The UIO driver is my own driver, and its creation looks like this:

First, we register a platform device:

	pdev = platform_device_register_simple("uio_driver", 0, NULL, 0);

Then we use a struct uio_info to describe the UIO device; the page that
is later mapped to userspace by uio_vma_fault() is allocated here:

	info->mem[0].addr = (phys_addr_t)kzalloc(PAGE_SIZE, GFP_ATOMIC);

Finally we register the UIO device:

	uio_register_device(&pdev->dev, info);

Thanks!

> thanks,
>
> greg k-h
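
For anyone trying to reproduce this, a minimal module along the lines
described above might look like the sketch below. It is assembled from
the three calls quoted in this mail; the remaining uio_info fields
(name, version, irq, size, memtype), the module boilerplate and the
error handling are my assumptions, not taken from the actual driver:

	#include <linux/err.h>
	#include <linux/module.h>
	#include <linux/platform_device.h>
	#include <linux/slab.h>
	#include <linux/uio_driver.h>

	static struct platform_device *pdev;
	static struct uio_info info;

	static int __init uio_sketch_init(void)
	{
		/* First, register a platform device for the UIO device to hang off. */
		pdev = platform_device_register_simple("uio_driver", 0, NULL, 0);
		if (IS_ERR(pdev))
			return PTR_ERR(pdev);

		info.name = "uio_driver";	/* assumed */
		info.version = "0.1";		/* assumed */
		info.irq = UIO_IRQ_NONE;	/* assumed */

		/*
		 * Kernel page that the UIO core later maps to userspace via
		 * its fault handler (uio_vma_fault).
		 */
		info.mem[0].addr = (phys_addr_t)kzalloc(PAGE_SIZE, GFP_ATOMIC);
		if (!info.mem[0].addr) {
			platform_device_unregister(pdev);
			return -ENOMEM;
		}
		info.mem[0].size = PAGE_SIZE;
		info.mem[0].memtype = UIO_MEM_LOGICAL;	/* assumed */

		/* Finally, register the UIO device itself. */
		return uio_register_device(&pdev->dev, &info);
	}
	module_init(uio_sketch_init);

	MODULE_LICENSE("GPL");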