Ryan Roberts <ryan.roberts@xxxxxxx> writes:

> On 18/10/2023 07:55, Huang, Ying wrote:
>> Ryan Roberts <ryan.roberts@xxxxxxx> writes:
>>
>> [snip]
>>
>>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>>> index a073366a227c..35cbbe6509a9 100644
>>> --- a/include/linux/swap.h
>>> +++ b/include/linux/swap.h
>>> @@ -268,6 +268,12 @@ struct swap_cluster_info {
>>>  struct percpu_cluster {
>>>  	struct swap_cluster_info index; /* Current cluster index */
>>>  	unsigned int next; /* Likely next allocation offset */
>>> +	unsigned int large_next[];	/*
>>> +					 * next free offset within current
>>> +					 * allocation cluster for large folios,
>>> +					 * or UINT_MAX if no current cluster.
>>> +					 * Index is (order - 1).
>>> +					 */
>>>  };
>>>
>>>  struct swap_cluster_list {
>>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>>> index b83ad77e04c0..625964e53c22 100644
>>> --- a/mm/swapfile.c
>>> +++ b/mm/swapfile.c
>>> @@ -987,35 +987,70 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
>>>  	return n_ret;
>>>  }
>>>
>>> -static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
>>> +static int swap_alloc_large(struct swap_info_struct *si, swp_entry_t *slot,
>>> +			    unsigned int nr_pages)
>>
>> This looks hacky. IMO, we should put the allocation logic inside the
>> percpu_cluster framework. If the percpu_cluster framework doesn't work
>> for you, just refactor it first.
>
> I'm not sure I really understand what you are suggesting - could you elaborate?
> What "framework"? I only see a per-cpu data structure and
> scan_swap_map_try_ssd_cluster(), which is very much geared towards order-0
> allocations.

I suggest sharing as much code as possible between order-0 and order > 0
swap entry allocation.  I think that we can make
scan_swap_map_try_ssd_cluster() work for order > 0 swap entry allocation
too.

> Are you suggesting you want to allocate large entries (> order-0) from the same
> cluster that is used for small (order-0) entries? The problem with this approach
> is that there may not be enough space left in the current cluster for the large
> entry that you are trying to allocate. Then you would need to take a new cluster
> from the free list and presumably leave the previous cluster with unused entries
> (which will now only be accessible by scanning). That unused space could be
> considerable.
>
> That's why I am currently reserving a "current cluster" per order; that way, all
> allocations are the same order, they are all naturally aligned and there is no
> waste.

I am fine with using one swap cluster per order per CPU.  I just think
that we should share the code.

> Perhaps I could implement "stealing" between cpus so that a cpu trying to
> allocate a specific order, but which doesn't have a current cluster for that
> order and the free list is empty, could allocate from another cpu's current
> cluster? I don't think it's a good idea to mix different orders in the same
> cluster though.

I think we can start from a simple solution, that is, just fall back to
splitting the large folio.  Then we can optimize it step by step.

> I guess if really low, I could remove a current cluster from a cpu and allow it
> to be scanned for order-0 allocations at least?

Better to have the same behavior for order-0 and order > 0.  Perhaps
begin with the current solution (allowing swap entries in the per-CPU
cluster to be stolen), then optimize based on that.

Not directly related to this patchset: maybe we can replace
swap_slots_cache with the per-CPU cluster in the future.  This would
reduce the code complexity.
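To make the code-sharing idea a bit more concrete, here is a rough,
untested sketch of what a single order-aware allocation helper could
look like, built only on the percpu_cluster fields in the quoted patch.
The helper names (cluster_next_for_order(), percpu_cluster_alloc()) are
invented for illustration, the large_next[] indexing follows the comment
in the swap.h hunk above, and the swap_map marking, SWAP_MAP_BAD padding
and si->lock handling done by the real code are left out:

static unsigned int *cluster_next_for_order(struct percpu_cluster *cluster,
					    unsigned int order)
{
	/* order 0 reuses 'next'; order > 0 uses large_next[] from the patch */
	return order ? &cluster->large_next[order - 1] : &cluster->next;
}

/*
 * Hand out 1 << order naturally aligned entries from this CPU's current
 * cluster for that order, taking a fresh cluster from the free list when
 * there is no current cluster.  Assumes the caller serialises allocators.
 */
static bool percpu_cluster_alloc(struct swap_info_struct *si,
				 unsigned int order, unsigned long *offset)
{
	struct percpu_cluster *cluster = this_cpu_ptr(si->percpu_cluster);
	unsigned int *next = cluster_next_for_order(cluster, order);
	unsigned int nr_pages = 1 << order;

	if (*next == UINT_MAX) {
		struct swap_cluster_info *ci;
		unsigned long idx;

		if (cluster_list_empty(&si->free_clusters))
			return false;	/* caller falls back to scan/split */

		idx = cluster_list_first(&si->free_clusters);
		*next = idx * SWAPFILE_CLUSTER;

		ci = lock_cluster(si, *next);
		alloc_cluster(si, idx);
		cluster_set_count(ci, SWAPFILE_CLUSTER);
		unlock_cluster(ci);
	}

	*offset = *next;
	*next += nr_pages;
	if (*next % SWAPFILE_CLUSTER == 0)
		*next = UINT_MAX;	/* cluster exhausted; pick a new one next time */

	return true;
}

The point is just that the "reserve a cluster, hand out naturally aligned
chunks, drop it when exhausted" logic is identical for every order, so
one helper parameterised by order could serve both the order-0 and
order > 0 paths.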
> Any opinions gratefully received! Thanks!

>>
>>> {
>>> +	int order_idx;
>>>  	unsigned long idx;
>>>  	struct swap_cluster_info *ci;
>>> +	struct percpu_cluster *cluster;
>>>  	unsigned long offset;
>>>
>>>  	/*
>>>  	 * Should not even be attempting cluster allocations when huge
>>>  	 * page swap is disabled. Warn and fail the allocation.
>>>  	 */
>>> -	if (!IS_ENABLED(CONFIG_THP_SWAP)) {
>>> +	if (!IS_ENABLED(CONFIG_THP_SWAP) ||
>>> +	    nr_pages < 4 || nr_pages > SWAPFILE_CLUSTER ||
>>> +	    !is_power_of_2(nr_pages)) {
>>>  		VM_WARN_ON_ONCE(1);
>>>  		return 0;
>>>  	}
>>>
>>> -	if (cluster_list_empty(&si->free_clusters))
>>> +	/*
>>> +	 * Not using clusters so unable to allocate large entries.
>>> +	 */
>>> +	if (!si->cluster_info)
>>>  		return 0;
>>>
>>> -	idx = cluster_list_first(&si->free_clusters);
>>> -	offset = idx * SWAPFILE_CLUSTER;
>>> -	ci = lock_cluster(si, offset);
>>> -	alloc_cluster(si, idx);
>>> -	cluster_set_count(ci, SWAPFILE_CLUSTER);
>>> +	order_idx = ilog2(nr_pages) - 2;
>>> +	cluster = this_cpu_ptr(si->percpu_cluster);
>>> +	offset = cluster->large_next[order_idx];
>>> +
>>> +	if (offset == UINT_MAX) {
>>> +		if (cluster_list_empty(&si->free_clusters))
>>> +			return 0;
>>> +
>>> +		idx = cluster_list_first(&si->free_clusters);
>>> +		offset = idx * SWAPFILE_CLUSTER;
>>>
>>> -	memset(si->swap_map + offset, SWAP_HAS_CACHE, SWAPFILE_CLUSTER);
>>> +		ci = lock_cluster(si, offset);
>>> +		alloc_cluster(si, idx);
>>> +		cluster_set_count(ci, SWAPFILE_CLUSTER);
>>> +
>>> +		/*
>>> +		 * If scan_swap_map_slots() can't find a free cluster, it will
>>> +		 * check si->swap_map directly. To make sure this standby
>>> +		 * cluster isn't taken by scan_swap_map_slots(), mark the swap
>>> +		 * entries bad (occupied). (same approach as discard).
>>> +		 */
>>> +		memset(si->swap_map + offset + nr_pages, SWAP_MAP_BAD,
>>> +			SWAPFILE_CLUSTER - nr_pages);
>>
>> There's an issue with this solution. If the free space of the swap
>> device runs low, it's possible that
>>
>> - some clusters are put in the percpu_cluster of some CPUs;
>>   the swap entries there are marked as used
>>
>> - no free swap entries elsewhere
>>
>> - nr_swap_pages isn't 0
>>
>> So, we will still scan the LRU, but swap allocation fails, although
>> there's still free swap space.
>
> Ahh yes, good spot.
>
>>
>> I think that we should follow the method we used for the original
>> percpu_cluster. That is, if all free swap entries are in
>> percpu_cluster, we will start to allocate from percpu_cluster.
>
> As I suggested above, I think I could do this by removing a cpu's current
> cluster for a given order from the percpu_cluster and making it generally
> available for scanning. Does that work for you?

Replied above.  (A rough sketch of this fallback is appended at the end
of this mail.)

>>
>>> +	} else {
>>> +		idx = offset / SWAPFILE_CLUSTER;
>>> +		ci = lock_cluster(si, offset);
>>> +	}
>>> +
>>> +	memset(si->swap_map + offset, SWAP_HAS_CACHE, nr_pages);
>>>  	unlock_cluster(ci);
>>> -	swap_range_alloc(si, offset, SWAPFILE_CLUSTER);
>>> +	swap_range_alloc(si, offset, nr_pages);
>>>  	*slot = swp_entry(si->type, offset);
>>>
>>> +	offset += nr_pages;
>>> +	if (idx != offset / SWAPFILE_CLUSTER)
>>> +		offset = UINT_MAX;
>>> +	cluster->large_next[order_idx] = offset;
>>> +
>>>  	return 1;
>>>  }
>>>
>>
>> [snip]

--
Best Regards,
Huang, Ying
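PS: for completeness, a rough sketch of the fallback discussed above,
i.e. surrendering a CPU's reserved cluster once the free list is empty
so that its unused entries become visible to scan_swap_map_slots()
again.  percpu_cluster_surrender() is an invented name, order_idx uses
the same encoding as in the quoted patch, and the cluster count
accounting and si->lock interaction are omitted:

static void percpu_cluster_surrender(struct swap_info_struct *si,
				     int order_idx)
{
	struct percpu_cluster *cluster = this_cpu_ptr(si->percpu_cluster);
	unsigned int offset = cluster->large_next[order_idx];
	struct swap_cluster_info *ci;
	unsigned long start;

	if (offset == UINT_MAX)
		return;		/* nothing reserved for this order */

	/* Base of the cluster this CPU had reserved. */
	start = (offset / SWAPFILE_CLUSTER) * SWAPFILE_CLUSTER;

	/* Clear the SWAP_MAP_BAD padding so the tail can be scanned again. */
	ci = lock_cluster(si, offset);
	memset(si->swap_map + offset, 0, start + SWAPFILE_CLUSTER - offset);
	unlock_cluster(ci);

	cluster->large_next[order_idx] = UINT_MAX;
}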