Memory policy change handling and callbacks for a new HW feature


 



We are trying to see how to adapt a new hardware feature of ours to NUMA.

This feature takes ownership of an area of kernel memory which, on a NUMA 
machine, is spread across the nodes, one area per node. This means two 
nodes -> two areas, four nodes -> four areas, and so on.

These areas are managed by a kernel driver.

A user application can make use of this HW feature by allocating a buffer 
from these areas (through the driver, via an ioctl).

In this case we end up (at the application level) with a VMA mapped to the 
just-reserved buffer.
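To make the setup concrete, the userspace side looks roughly like the sketch 
below (the device name, ioctl number, and struct are all made-up names for 
illustration, not our real interface):

```c
/* Hypothetical sketch: /dev/mydrv, MYDRV_IOC_ALLOC and struct
 * mydrv_alloc are invented names, not our actual driver interface. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

struct mydrv_alloc {
	size_t size;	/* in:  requested buffer size       */
	off_t  offset;	/* out: mmap offset of the buffer   */
};

void *mydrv_alloc_buffer(size_t size)
{
	struct mydrv_alloc req = { .size = size };
	int fd = open("/dev/mydrv", O_RDWR);

	ioctl(fd, MYDRV_IOC_ALLOC, &req);   /* reserve from the HW area */
	return mmap(NULL, size, PROT_READ | PROT_WRITE,
		    MAP_SHARED, fd, req.offset);
}
```

After this, mbind() on the returned range is what reaches the driver, as 
described below.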

We want to associate these buffers with the right node according to the 
memory policy.

The 'VMA mapped to the buffer' design is very convenient, as changing the 
memory policy of the buffer's address range (mbind API) causes a call into 
the driver via the vm_operations_struct.set_policy callback, and we can 
then take the right action: moving the buffer to other nodes.
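For reference, the mbind()-driven path that does work for us looks roughly 
like this on the driver side (the mydrv_* names and helpers are made up; 
this is a sketch of the hookup, not our actual code):

```c
/* Hypothetical sketch: mydrv_buffer, mydrv_move_buffer() and friends
 * are invented names standing in for our driver's internals. */
#include <linux/mm.h>
#include <linux/mempolicy.h>

static int mydrv_set_policy(struct vm_area_struct *vma,
			    struct mempolicy *new)
{
	struct mydrv_buffer *buf = vma->vm_private_data;

	/* Invoked when mbind() is applied to this VMA's range:
	 * migrate the HW buffer to a node allowed by the new policy. */
	return mydrv_move_buffer(buf, new);
}

static const struct vm_operations_struct mydrv_vm_ops = {
	.set_policy = mydrv_set_policy,
};

static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
{
	vma->vm_ops = &mydrv_vm_ops;
	vma->vm_private_data = /* the buffer reserved via the ioctl */;
	return 0;
}
```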



A problem occurs when the policy is changed at process scope, for example 
via a numa_set_membind() API call. In this case we do not get any 
callback, and I do not see how we can handle the policy change when it 
happens.

We also registered the vm_operations_struct.migrate callback, but I could 
not see it being invoked either.
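One workaround we considered (not yet tried): since numa_set_membind() ends 
up in set_mempolicy(), which only updates the task policy and never touches 
any VMA, perhaps the driver can only re-check the task policy lazily, e.g. 
on the next fault into the buffer. A rough, hypothetical sketch (the 
mydrv_* helpers are imaginary, and this assumes the buffer remembers which 
node it currently lives on):

```c
/* Hypothetical sketch: mydrv_policy_allows() and mydrv_move_buffer()
 * are imaginary helpers; only current->mempolicy comes from the kernel. */
static int mydrv_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	struct mydrv_buffer *buf = vma->vm_private_data;

	/* No callback fires for set_mempolicy(); the task-scope policy
	 * lives in current->mempolicy, so re-check it here and migrate
	 * lazily if the buffer's node is no longer allowed. */
	if (!mydrv_policy_allows(current->mempolicy, buf->node))
		mydrv_move_buffer(buf, current->mempolicy);

	vmf->page = buf->pages[vmf->pgoff];
	get_page(vmf->page);
	return 0;
}
```

Of course this only helps for pages that are not already mapped at the time 
the policy changes, which is why it does not feel like a real answer either.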



Does anyone have a suggestion about the right way to handle this?



Thanks,

Serge

--
To unsubscribe from this list: send the line "unsubscribe linux-numa" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



