On Fri, Feb 24, 2012 at 9:44 PM, KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxx> wrote:
> Oh, maybe generically you are right, but you missed one thing. Before
> your patch, stack or not stack was an address space property, so using
> /proc/pid/maps made sense. After your patch, it is no longer a memory
> property: applications can use the heap or a mapped file as a stack.
> So, at the very least, your current code is wrong; it assumes the
> memory properties are mutually exclusive.

Right, but I cannot think of an alternative that does not involve
touching some sensitive code. The setcontext family of functions, where
any portion of the heap, the stack, or even the data area can be used as
a stack, breaks the very concept of an entire vma being used as a stack.
In such a scenario the kernel can only show what it knows, which is that
a specific vma is being used as a stack. I agree that it is not correct
to show the entire vma as stack, but there does not seem to be a better
way in this implementation. FWIW, if the stack space is allocated on the
heap, it will show up as heap and not stack, since the former gets
preference.

> Moreover, if the pthread stack is unimportant, I wonder why we need
> this patch at all. Which application needs it, and when?

Right, my original intent was to mark stack vmas allocated by pthreads,
including the vmas sitting in the pthreads stack cache. However, this
means that the kernel has no real control over what it marks as stack.
The concept is also specific to the glibc pthreads implementation, so we
would essentially be making the kernel spit out some data blindly for
glibc.

The other solution I can think of is to make stack_start a task-level
property, so that each task knows its own stack vma start (obtained from
its sys_clone call and not from mmap). This, however, means increasing
the size of task_struct by sizeof(unsigned long). Is that overhead
acceptable?

--
Siddhesh Poyarekar
http://siddhesh.in
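As an illustration of the setcontext scenario above, here is a minimal,
untested sketch in which a context runs on a malloc'd stack. In
/proc/pid/maps that memory sits inside the heap vma, so no per-vma stack
marker can describe what is actually happening:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t main_ctx, func_ctx;

static void on_heap_stack(void)
{
        /* This function executes with its stack inside the heap vma. */
        printf("running on a malloc'd stack\n");
}

int main(void)
{
        getcontext(&func_ctx);
        func_ctx.uc_stack.ss_sp = malloc(SIGSTKSZ); /* heap vma, not [stack] */
        func_ctx.uc_stack.ss_size = SIGSTKSZ;
        func_ctx.uc_link = &main_ctx;               /* resume main on return */
        makecontext(&func_ctx, on_heap_stack, 0);
        swapcontext(&main_ctx, &func_ctx);          /* error handling omitted */
        return 0;
}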
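And a similarly hedged sketch of the stack_start idea: the only point at
which the kernel reliably learns where a thread's stack begins is the
clone() call itself, since the memory handed to clone() is an ordinary
anonymous mapping like any other (untested, error handling omitted):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>

#define STACK_SIZE (1024 * 1024)

static int child_fn(void *arg)
{
        printf("child running on a caller-allocated stack\n");
        return 0;
}

int main(void)
{
        /* An anonymous mapping like any other; only the clone() call
         * below tells the kernel it will be used as the child's stack. */
        char *stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
        /* Stacks grow down on most architectures, so pass the top. */
        pid_t pid = clone(child_fn, stack + STACK_SIZE,
                          CLONE_VM | SIGCHLD, NULL);
        waitpid(pid, NULL, 0);
        return 0;
}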