Re: [PATCH] ceph: make the ceph-cap workqueue UNBOUND

On Mon, May 20, 2024 at 9:37 PM Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
>
> On Thu, Mar 21, 2024 at 3:18 AM <xiubli@xxxxxxxxxx> wrote:
> >
> > From: Xiubo Li <xiubli@xxxxxxxxxx>
> >
> > There is no harm in marking the ceph-cap workqueue unbound, just
> > like we do for the ceph-inode workqueue.
> >
> > URL: https://www.spinics.net/lists/ceph-users/msg78775.html
> > URL: https://tracker.ceph.com/issues/64977
> > Reported-by: Stefan Kooman <stefan@xxxxxx>
> > Signed-off-by: Xiubo Li <xiubli@xxxxxxxxxx>
> > ---
> >  fs/ceph/super.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/fs/ceph/super.c b/fs/ceph/super.c
> > index 4dcbbaa297f6..0bfe4f8418fd 100644
> > --- a/fs/ceph/super.c
> > +++ b/fs/ceph/super.c
> > @@ -851,7 +851,7 @@ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt,
> >         fsc->inode_wq = alloc_workqueue("ceph-inode", WQ_UNBOUND, 0);
> >         if (!fsc->inode_wq)
> >                 goto fail_client;
> > -       fsc->cap_wq = alloc_workqueue("ceph-cap", 0, 1);
> > +       fsc->cap_wq = alloc_workqueue("ceph-cap", WQ_UNBOUND, 1);
>
> Hi Xiubo,
>
> You wrote that there is no harm in making the ceph-cap workqueue
> unbound, but if it's made unbound, it would be almost the same as the
> ceph-inode workqueue.  The only difference would be that the
> max_active parameter for the ceph-cap workqueue is 1 instead of 0
> (i.e. some default which is pretty high).  Given that max_active is
> interpreted as a per-CPU number even for unbound workqueues, up to
> $NUM_CPUS work items submitted to the ceph-cap workqueue could still
> be active in the system.
>
> Does CephFS need/rely on the $NUM_CPUS limit somewhere?  If not, how about
> removing ceph-cap workqueue and submitting its work items to ceph-inode
> workqueue instead?
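
To make that concrete, roughly something like this (untested, just a
sketch; the cap_wq field in struct ceph_fs_client, its error path and
the destroy_workqueue() call in destroy_fs_client() would go away too):

--- a/fs/ceph/super.c
+++ b/fs/ceph/super.c
@@ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt,
        fsc->inode_wq = alloc_workqueue("ceph-inode", WQ_UNBOUND, 0);
        if (!fsc->inode_wq)
                goto fail_client;
-       fsc->cap_wq = alloc_workqueue("ceph-cap", 0, 1);
-       if (!fsc->cap_wq)
-               goto fail_inode_wq;

with the cap_wq users in mds_client.c (ceph_queue_cap_reclaim_work()
and friends) switched over along the lines of:

-       if (queue_work(mdsc->fsc->cap_wq, &mdsc->cap_reclaim_work)) {
+       if (queue_work(mdsc->fsc->inode_wq, &mdsc->cap_reclaim_work)) {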

Related question: why does ceph_force_reconnect() flush only one of these
workqueues (ceph-inode) instead of both?  When invalidating everything,
aren't we concerned about potential stale work items from before the
session is recovered?
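
In other words, should it be doing something along these lines
(untested, just to illustrate what I mean):

--- a/fs/ceph/super.c
+++ b/fs/ceph/super.c
@@ static int ceph_force_reconnect(struct super_block *sb)
        flush_workqueue(fsc->inode_wq);
+       flush_workqueue(fsc->cap_wq);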

Thanks,

                Ilya