Would like to attend the upcoming summit. I am interested in the large memory discussions (including NVRAM and DAX) as well as improvements in huge page support (page cache, easy configurability, consistency across multiple huge page sizes, etc.). Another important subject would be investigating ways to improve I/O throughput from memory for large scale datasets (1TB or higher); maybe this straddles a bit into the FS track too.

I recently stumbled over another way to avoid fragmentation: reserving a certain number of free pages at each page order. This seems to have been deployed at a large ISP for years now and to be working out ok. It may be worth another stab at the problem of higher-order page availability, and I would like to discuss whether this approach could be upstreamed.

Then I'd like to continue exploring ways to avoid fragmentation, such as movable objects in slab caches (see the xarray implementation for an example). Coming up with a targeted reclaim/move approach for inodes and dentries would also be interesting, in particular since these already have _isolate_ functions. That would be akin to the early steps of page migration, where the focus was on targeted reclaim (and then reloading the page from swap) to simplify the approach, rather than on making the page actually movable.

There are numerous other issues with large memory and the throughput of extreme HPC loads that my coworkers are currently running into. It would be good to share experiences and figure out ways to address them.
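For concreteness, here is a rough, untested sketch of the per-order reservation idea mentioned above. All names are made up; this is not the code actually running at the ISP, just an illustration of the mechanism I'd like to discuss:

/*
 * Sketch only (hypothetical names): keep a configurable floor of
 * free blocks at each page order, and refuse to split a block at an
 * order whose free count would drop below its floor. Higher orders
 * then cannot be fragmented away by a stream of small allocations.
 */
#include <stdbool.h>

#define NR_ORDERS 11

struct order_reserve {
	unsigned long min_free;	/* configured reserve for this order */
	unsigned long nr_free;	/* current free blocks at this order */
};

static struct order_reserve reserves[NR_ORDERS];

/*
 * May the allocator split a free block of this order to satisfy a
 * lower-order request? Splitting consumes one block at 'order', so
 * only allow it while the reserve stays intact.
 */
static bool may_split_order(unsigned int order)
{
	return reserves[order].nr_free > reserves[order].min_free;
}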