Re: [PATCH 12/13] fsverity: remove system-wide workqueue

On Thu, Apr 04, 2024 at 11:14:07PM -0400, Eric Biggers wrote:
> On Fri, Mar 29, 2024 at 05:35:48PM -0700, Darrick J. Wong wrote:
> > From: Darrick J. Wong <djwong@xxxxxxxxxx>
> > 
> > Now that we've made the verity workqueue per-superblock, we don't need
> > the systemwide workqueue.  Get rid of the old implementation.
> 
> This commit message needs to be rephrased because this commit isn't just
> removing unused code.  It's also converting ext4 and f2fs over to the new
> workqueue type.  (Maybe these two parts belong as separate patches?)

Yes, will fix that.

> Also, if there are any changes in the workqueue flags that are being used for
> ext4 and f2fs, that needs to be documented.

Hmm.  The current codebase does this:

	fsverity_read_workqueue = alloc_workqueue("fsverity_read_queue",
						  WQ_HIGHPRI,
						  num_online_cpus());

Looking at commit f959325e6ac3 ("fsverity: Remove WQ_UNBOUND from
fsverity read workqueue"), I guess you want a bound workqueue so that
the CPU that handles the readahead ioend will also handle the verity
validation?

Why do you set max_active to num_online_cpus()?  Is that because the
verity hash is (probably?) being computed on the CPUs, and there are only
so many of those to go around, so there's little point in allowing more
concurrent workers?  Or is it to handle systems with more than
WQ_DFL_ACTIVE (~256) CPUs?
Maybe there's a different reason?

If you add more CPUs to the system later, does this now constrain the
number of CPUs that can participate in verity validation?  Why not
let the system try to process as many read ioends as are ready to be
processed, rather than introducing a constraint here?
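For what it's worth, a minimal sketch of the unconstrained variant I have
in mind -- passing 0 for max_active tells the workqueue core to use its
default limit (WQ_DFL_ACTIVE) instead of pinning it to the CPU count at
init time:

	fsverity_read_workqueue = alloc_workqueue("fsverity_read_queue",
						  WQ_HIGHPRI, 0);
	if (!fsverity_read_workqueue)
		return -ENOMEM;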

As for WQ_HIGHPRI, I wish Dave or Andrey would chime in on why this
isn't appropriate for XFS.  I think they have a reason for this, but the
most I can do is speculate that it's to avoid blocking other things in
the system.

In Andrey's V5 patch, XFS creates its own workqueue like this:
https://lore.kernel.org/linux-xfs/20240304191046.157464-10-aalbersh@xxxxxxxxxx/

	struct workqueue_struct *wq = alloc_workqueue(
		"pread/%s", (WQ_FREEZABLE | WQ_MEM_RECLAIM), 0, sb->s_id);

I don't grok this either -- read ioend workqueues aren't usually
involved in memory reclaim at all, and I can't see why you'd want to
freeze the verity workqueue during suspend.  Reads are allowed on frozen
filesystems, so I don't see why verity would be any different.

In the end, I think I might change this to:

int __fsverity_init_verify_wq(struct super_block *sb,
			      unsigned int wq_flags,
			      int max_active)
{
	struct workqueue_struct *wq;

	wq = alloc_workqueue("fsverity/%s", wq_flags, max_active, sb->s_id);
	...
}

So that XFS can have what I think makes sense (wq_flags == 0,
max_active == 0), and ext4/f2fs can continue the same way they do now.
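Roughly, the call sites would then look something like this (hypothetical
sketch, error handling elided):

	/* XFS: no special flags, default concurrency limit */
	err = __fsverity_init_verify_wq(sb, 0, 0);

	/* ext4/f2fs: keep the flags and limit they use today */
	err = __fsverity_init_verify_wq(sb, WQ_HIGHPRI, num_online_cpus());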

--D

> - Eric
> 



