On Wed, 26 Aug 2009, Lee Schermerhorn wrote:

> > I think it would probably be better to use the generic NODEMASK_ALLOC()
> > interface by requiring it to pass the entire type (including "struct")
> > as part of the first parameter.  Then it automatically takes care of
> > dynamically allocating large nodemasks vs. allocating them on the
> > stack.
> >
> > Would it work by redefining NODEMASK_ALLOC() in the NODES_SHIFT > 8
> > case to be this:
> >
> > 	#define NODEMASK_ALLOC(x, m)	x *m = kmalloc(sizeof(*m), GFP_KERNEL);
> >
> > and converting NODEMASK_SCRATCH(x) to NODEMASK_ALLOC(struct
> > nodemask_scratch, x), and then doing this in your code:
> >
> > 	NODEMASK_ALLOC(nodemask_t, nodes_allowed);
> > 	if (nodes_allowed)
> > 		*nodes_allowed = nodemask_of_node(node);
> >
> > The NODEMASK_{ALLOC,SCRATCH}() interface is in its infancy, so it can
> > probably be made more general to handle cases like this.
>
> I just don't know what that would accomplish.  Heck, I'm not all that
> happy with alloc_nodemask_from_node() because it's allocating both a
> hidden nodemask_t and a pointer thereto on the stack just to return a
> pointer to a kmalloc()ed nodemask_t--which is what I want/need here.
>
> One issue I have with NODEMASK_ALLOC() [and nodemask_of_node(), et al]
> is that it declares the pointer variable as well as initializing it,
> perhaps with kmalloc(), ...  Indeed, its purpose is to replace on-stack
> nodemask declarations.

Right, which is why I suggest we only have one such interface to
dynamically allocate nodemasks when NODES_SHIFT > 8.  That's what defines
NODEMASK_ALLOC() as being special: it takes NODES_SHIFT into
consideration just like CPUMASK_ALLOC() would take NR_CPUS into
consideration.
Your use case is the intended purpose of NODEMASK_ALLOC() and I see no
reason why your code can't use the same interface with some
modification; it's in the best interest of maintainability not to
duplicate specialized cases where pre-existing interfaces can be used
(or improved, in this case).

> So, to use it at the start of, e.g., set_max_huge_pages() where I can
> safely use it throughout the function, I'll end up allocating the
> nodes_allowed mask on every call, whether or not a node is specified or
> there is a non-default mempolicy.  If it turns out that no node was
> specified and we have default policy, we need to free the mask and NULL
> out nodes_allowed up front so that we get default behavior.  That seems
> uglier to me than only allocating the nodemask when we know we need
> one.

Not with my suggested code of disabling local irqs, getting a reference
to the mempolicy so it can't be freed, reenabling, and then only using
NODEMASK_ALLOC() in the switch statement on mpol->mode for
MPOL_PREFERRED.

> I'm not opposed to using a generic function/macro where one exists that
> suits my purposes.  I just don't see one.  I tried to create
> one--alloc_nodemask_from_node()--and, to keep Mel happy, I tried to
> reuse nodemask_from_node() to initialize it.  I'm really not happy with
> the results because of those extra, hidden stack variables.  I could
> eliminate those by creating an out-of-line function, but there's no
> good place to put a generic nodemask function--no nodemask.c.

Using NODEMASK_ALLOC(nodes_allowed) wouldn't really be a hidden stack
variable, would it?  I think most developers would assume that it is
some automatic variable called `nodes_allowed' since it's later
referenced (and only needs to be in the case of MPOL_PREFERRED if my
mpol_get() solution with disabled local irqs is used).
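The irq-disable/mpol_get() flow I'm suggesting would look roughly like
this -- kernel-style pseudocode only, not compile-tested; the function
shape and the mempolicy field names are assumptions for illustration:

```
static int set_max_huge_pages(/* ... */)
{
	nodemask_t *nodes_allowed = NULL;
	struct mempolicy *mpol;

	local_irq_disable();
	mpol = current->mempolicy;
	mpol_get(mpol);			/* pin it so it can't be freed */
	local_irq_enable();

	switch (mpol->mode) {
	case MPOL_PREFERRED:
		/* Allocate only when we actually need a mask. */
		NODEMASK_ALLOC(nodemask_t, nodes);
		if (nodes) {
			*nodes = nodemask_of_node(mpol->v.preferred_node);
			nodes_allowed = nodes;
		}
		break;
	default:
		/* Default policy: leave nodes_allowed NULL. */
		break;
	}
	mpol_put(mpol);
	/* ... use nodes_allowed, then kfree() it if non-NULL ... */
}
```

That avoids the up-front allocate-then-free on every call that you were
objecting to: the NODEMASK_ALLOC() only happens inside the
MPOL_PREFERRED case.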
--
To unsubscribe from this list: send the line "unsubscribe linux-numa" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html