On 6/21/21 6:14 PM, Huang, Ying wrote:
>> In our P9+Volta system, GPU memory is exposed as a NUMA node.
>> For GPU workloads with a data size greater than the GPU memory size,
>> it will be very helpful to allow pages in GPU memory to be migrated/demoted
>> to CPU memory. With your current assumption, GPU memory -> CPU memory
>> demotion seems not possible, right? This should also apply to any
>> system with device memory exposed as a NUMA node and workloads running
>> on the device and using CPU memory as a lower memory tier than the device
>> memory.
>
> Thanks a lot for your use case! It appears that a demotion path
> specified by users is one possible way to satisfy your requirement, and
> I think it's possible to enable that on top of this patchset. But we
> still have no specific plan to work on that, at least for now.

In other words, patches to adapt this to your use case would be most welcome!