On Tue, 6 Sept 2022 at 09:44, Oscar Salvador <osalvador@xxxxxxx> wrote:
>
> On Mon, Sep 05, 2022 at 02:57:50PM +0200, Marco Elver wrote:
> > On Mon, Sep 05, 2022 at 05:10AM +0200, Oscar Salvador wrote:
> > [...]
> > > +int stack_depot_print_stacks_threshold(char *buf, size_t size, loff_t *pos)
> >
> > Can you add a kernel-doc comment for what this does (and also update
> > accordingly in 3/3 when you add 'threshold').
>
> Yes, I guess a kernel-doc comment is due.
>
> > From what I see it prints *all* stacks that have a non-zero count.
> > Correct?
>
> That's right.
>
> > If so, should this be called stack_depot_print_all_count() (having
> > stack(s) in the name twice doesn't make it more obvious what it does)?
> > Then in the follow-up patch you add the 'threshold' arg.
>
> I guess so. The only reason I went with the actual name is that for me
> "stack_depot" was kind of the name of the module/library, and so I
> wanted to make crystal clear what we were printing.
>
> But I'm OK with renaming it if it's already self-explanatory.

I think it's clear from the fact that we're using the stack depot that
any printing will print stacks. To mirror the existing
'stack_depot_print()', I'd go with 'stack_depot_print_all_count()'.

> > > +{
> > > +	int i = *pos, ret = 0;
> > > +	struct stack_record **stacks, *stack;
> > > +	static struct stack_record *last = NULL;
> > > +	unsigned long stack_table_entries = stack_hash_mask + 1;
> > > +
> > > +	/* Continue from the last stack if we have one */
> > > +	if (last) {
> > > +		stack = last->next;
> >
> > This is dead code?
>
> No, more below.
>
> > Unless I'm missing something really obvious, I was able to simplify
> > the above function to just this (untested!):
> >
> > int stack_depot_print_stacks_threshold(char *buf, size_t size, loff_t *pos)
> > {
> > 	const unsigned long stack_table_entries = stack_hash_mask + 1;
> >
> > 	/* Iterate over all tables for valid stacks. */
> > 	for (; *pos < stack_table_entries; (*pos)++) {
> > 		for (struct stack_record *stack = stack_table[*pos]; stack; stack = stack->next) {
> > 			if (!stack->size || stack->size < 0 || stack->size > size ||
> > 			    stack->handle.valid != 1 || refcount_read(&stack->count) < 1)
> > 				continue;
> >
> > 			return stack_trace_snprint(buf, size, stack->entries, stack->size, 0) +
> > 			       scnprintf(buf + ret, size - ret, "stack count: %d\n\n",
> > 					 refcount_read(&stack->count));
> > 		}
> > 	}
> >
> > 	return 0;
>
> Yes, this will not work.
>
> You have stack_table[], which is an array of struct stacks, and each
> struct stack has a pointer to its next stack, which walks from the
> beginning of a specific table till the end. E.g.:
>
> stack_table[0] = {stack1, stack2, stack3, ...} (each linked by ->next)
> stack_table[1] = {stack1, stack2, stack3, ...} (each linked by ->next)
> ...
> stack_table[stack_table_entries - 1] = {stack1, stack2, stack3, ...} (each linked by ->next)
>
> *pos holds the index of stack_table[], while "last" holds the last
> stack within the table we were processing.
>
> So, when we find a valid stack to print, we set "last" to that stack,
> and *pos to the index of stack_table.
> Then, when we call stack_depot_print_stacks_threshold() again, we set
> "stack" to "last"->next, and we are ready to keep looking with:
>
> 	for (; stack; stack = stack->next) {
> 		...
> 		check if stack is valid
> 	}
>
> Should we not find any more valid stacks in that stack_table, we need
> to check the next table, so we do:
>
> 	i++; (note that i was set to *pos at the beginning of the function)
> 	*pos = i;
> 	last = NULL;
> 	goto new_table;
>
> and now we are ready to do:
>
> new_table:
> 	stacks = &stack_table[i];
> 	stack = (struct stack_record *)stacks;
>
> Does this clarify it a little bit?
>
> About using static vs. non-static:
> In v1, I was using a parameter which contained last_stack:
>
> https://patchwork.kernel.org/project/linux-mm/patch/20220901044249.4624-3-osalvador@xxxxxxx/
>
> Not sure if that's better? Thoughts?

Moderately better, but still not great. Essentially you need 2 cursors,
but with loff_t you only get 1.

I think the loff_t parameter can be used to encode both cursors. In the
kernel, loff_t is always 'long long', so it'll always be 64-bit.

Let's assume that collisions in the hash table are rare, so the number
of stacks per bucket is typically small. Then you can encode the index
into the bucket in bits 0-31 and the bucket index in bits 32-63.
STACK_HASH_ORDER_MAX is 20, so 32 bits is plenty to encode the index.