On Wed, Dec 13, 2023 at 5:20 AM Ronald Monthero <debug.penguin32@xxxxxxxxx> wrote:
>
> Hi Nhat,
> Thanks for checking.
>
> On Tue, Dec 12, 2023 at 12:16 AM Nhat Pham <nphamcs@xxxxxxxxx> wrote:
> >
> > On Sun, Dec 10, 2023 at 9:31 PM Ronald Monthero
> > <debug.penguin32@xxxxxxxxx> wrote:
> > >
> > > Use alloc_workqueue() to create and set finer
> > > work item attributes instead of create_workqueue()
> > > which is to be deprecated.
> > >
> > > Signed-off-by: Ronald Monthero <debug.penguin32@xxxxxxxxx>
> > > ---
> > >  mm/zswap.c | 3 ++-
> > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/mm/zswap.c b/mm/zswap.c
> > > index 74411dfdad92..64dbe3e944a2 100644
> > > --- a/mm/zswap.c
> > > +++ b/mm/zswap.c
> > > @@ -1620,7 +1620,8 @@ static int zswap_setup(void)
> > >                 zswap_enabled = false;
> > >         }
> > >
> > > -       shrink_wq = create_workqueue("zswap-shrink");
> > > +       shrink_wq = alloc_workqueue("zswap-shrink",
> > > +                       WQ_UNBOUND|WQ_MEM_RECLAIM, 0);
> >
> > Hmmm this changes the current behavior a bit, right? create_workqueue()
> > is currently defined as:
> >
> > alloc_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, 1, (name))
>
> create_workqueue() is deprecated, and most subsystems have already
> changed over to alloc_workqueue(); only a small minority of remnant
> instances in the kernel and some drivers still use create_workqueue().
> As defined, create_workqueue() hardcodes the workqueue to be per-cpu
> (bound) in nature, and gives no flexibility to pass any other
> workqueue flags/attributes (WQ_CPU_INTENSIVE, WQ_HIGHPRI,
> WQ_FREEZABLE, WQ_UNBOUND, ...). Hence most subsystems and drivers
> use the alloc_workqueue() API directly:
>
> #define create_workqueue(name)                                        \
>         alloc_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, 1, (name))
>
> > I think this should be noted in the changelog, at the very least, even
> > if it is fine.
> > We should be as explicit as possible about behavior
> > changes.
>
> imo, it's sort of known and has been consistently changed for quite
> some time already:
> https://lists.openwall.net/linux-kernel/2016/06/07/1086
> https://lists.openwall.net/linux-kernel/2011/01/03/124
> https://lwn.net/Articles/403891/ => quoted: "The long-term plan, it
> seems, is to convert all create_workqueue() users over to an
> appropriate alloc_workqueue() call; eventually create_workqueue() will
> be removed"
>
> Glad to take some suggestions. Thoughts?
>
> BR,
> ronald

I should have been clearer. I'm not against the change per se - I agree
that we should replace create_workqueue() with alloc_workqueue(). What I
meant was, IIUC, there are two behavioral changes with this new
workqueue creation:

a) We're replacing a bound workqueue (which, as you noted, is fixed by
create_workqueue()) with an unbound one (WQ_UNBOUND). This seems fine to
me - I doubt locality buys us much here.

b) create_workqueue() limits the number of concurrent per-cpu execution
contexts to 1 (i.e. only a single global reclaimer), whereas after this
patch max_active is set to the default value. This seems fine to me too
- I don't remember us taking advantage of the previous concurrency
limitation. Also, in practice, the task_struct is one-to-one with the
zswap_pools anyway, and most of the time there is just a single pool
being used. (But it begs the question - what's the point of using 0
instead of 1 here?)

Both seem fine (to me anyway - other reviewers feel free to take a look
and fact-check everything). I just feel like this should be explicitly
noted in the changelog, IMHO, in case we are mistaken and need to
revisit this :)

Either way, not a NACK from me.
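For reviewers skimming the thread, here are the old and new calls spelled
out side by side. This is only a sketch: the expansion follows the
create_workqueue() macro quoted above, and the reading of max_active = 0
as "use the default" is per my understanding of include/linux/workqueue.h.

```c
/* Old: create_workqueue("zswap-shrink") expands (per the quoted macro)
 * to a bound, per-cpu, legacy workqueue with max_active fixed at 1:
 */
shrink_wq = alloc_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, 1,
			    "zswap-shrink");

/* New: an unbound workqueue; max_active = 0 means "use the default"
 * (WQ_DFL_ACTIVE), not a limit of zero -- which is behind the
 * question above about passing 0 instead of 1.
 */
shrink_wq = alloc_workqueue("zswap-shrink",
			    WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
```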