Re: [PATCH 4/4] xfs: reactivate XFS_NEED_INACTIVE inodes from xfs_iget

On Thu, Mar 21, 09:51, Dave Chinner wrote:
> I just haven't thought to run sparse on XFS recently - running
> sparse on a full kernel build is just .... awful. I think I'll
> change my build script so that when I do an '--xfs-only' build it
> also enables sparse as it's only rebuilding fs/xfs at that point....

Would it be less awful to run coccinelle with a selected set of
semantic patches that catch defective patterns such as double
unlock/free?
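
Something along these lines, perhaps (an untested sketch in the
spirit of the existing scripts/coccinelle/locks/ rules; it only
flags two unlocks of the same lock with no intervening lock):

    @@
    expression E;
    @@

    * spin_unlock(E);
      ... when != spin_lock(E)
    * spin_unlock(E);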

> > > (Doesn't simple lock debugging catch these sorts of things?)
> > 
> > Maybe this error path doesn't get exercised because xfs_reinit_inode()
> > never fails. AFAICT, it can only fail if security_inode_alloc()
> > can't allocate the composite inode blob.
> 
> Which syzkaller triggers every so often. I also do all my testing
> with selinux enabled, so security_inode_alloc() is actually being
> exercised and definitely has the potential to fail on my small
> memory configs...

One could make ENOMEM easier to trigger in functions like this by
making the slab allocations in question bigger in debug builds, so
that they fail sooner under memory pressure.
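
A hedged sketch of the idea (the config symbol and the padding
factor are made up, not proposed kernel code):

    #ifdef CONFIG_DEBUG_ENOMEM		/* hypothetical debug option */
    # define BLOB_PAD	(16 * PAGE_SIZE)	/* inflate the object */
    #else
    # define BLOB_PAD	0
    #endif

	/* larger objects fail sooner on small-memory configs */
	blob = kzalloc(size + BLOB_PAD, GFP_NOFS);
	if (!blob)
		return -ENOMEM;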

> > > ((It sure would be nice if locking returned a droppable "object" to do
> > > the unlock ala Rust and then spin_lock could be __must_check.))
> > 
> > There's the *LOCK_GUARD* macros which employ gcc's cleanup attribute
> > to automatically call e.g. spin_unlock() when a variable goes out of
> > scope (see 54da6a0924311).
> 
> IMO, the LOCK_GUARD stuff is an awful anti-pattern. It means some
> error paths -look broken- because they lack unlocks, and we have to
> explicitly change code to return from functions with the guarded
> locks held. This is a diametrically opposed locking pattern to the
> existing non-guarded locking patterns - correct behaviour in one
> pattern is broken behaviour in the other, and vice versa.
> 
> That's just -insane- from a code maintenance point of view.

Converting all locks in fs/xfs in one go is not an option either:
such a series would be too big to review and non-trivial to begin
with. There are 180+ calls to spin_lock() alone, and that's just the
spinlocks. It would also interfere badly with ongoing work.
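
To make the clash concrete, here is a hedged sketch (the predicate
and lock names are illustrative, not real XFS code):

    /* classic style: every early return unlocks explicitly */
    spin_lock(&ip->i_flags_lock);
    if (!worth_doing(ip)) {			/* hypothetical check */
	    spin_unlock(&ip->i_flags_lock);	/* omitting this is the bug */
	    return -EAGAIN;
    }
    spin_unlock(&ip->i_flags_lock);
    return 0;

    /* guarded style: the same early return must NOT unlock */
    guard(spinlock)(&ip->i_flags_lock);
    if (!worth_doing(ip))
	    return -EAGAIN;	/* lock dropped implicitly at scope exit */
    return 0;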

> And they are completely useless for anything complex like these
> XFS icache functions because the lock scope is not balanced across
> functions.
>
> The lock can also be taken by functions called within the guard
> scope, and so using guarded lock scoping would result in deadlocks.
> i.e. xfs_inodegc_queue() needs to take the i_flags_lock, so it must
> be dropped before we call that.

Yup, these can't use the LOCK_GUARD macros, which leads to an unholy
mix of guarded and unguarded locks.
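
A hedged sketch of the deadlock you describe (the flag update is
simplified, and the guard conversion is hypothetical):

    /* today: drop the lock before queueing, because the callee
     * takes i_flags_lock itself */
    spin_lock(&ip->i_flags_lock);
    ip->i_flags |= XFS_NEED_INACTIVE;
    spin_unlock(&ip->i_flags_lock);
    xfs_inodegc_queue(ip);

    /* with a guard, the lock is still held at the call site: */
    guard(spinlock)(&ip->i_flags_lock);
    ip->i_flags |= XFS_NEED_INACTIVE;
    xfs_inodegc_queue(ip);	/* self-deadlock on i_flags_lock */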

> So, yeah, lock guards seem to me to be largely just a "look ma, no
> need for rust because we can mightily abuse the C preprocessor!"
> anti-pattern looking for a problem to solve.

Do you think there is a valid use case for the cleanup attribute,
or do you believe that the whole concept is mis-designed?
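
For instance, scope-bound heap allocations via the __free()
annotations from <linux/cleanup.h> look less contentious to me.
A sketch, using the kfree cleanup already defined in <linux/slab.h>
(foo_init() is a hypothetical helper):

    struct foo *p __free(kfree) = kzalloc(sizeof(*p), GFP_KERNEL);

    if (!p)
	    return ERR_PTR(-ENOMEM);
    if (foo_init(p) < 0)
	    return ERR_PTR(-EINVAL);	/* p is freed automatically */
    return_ptr(p);			/* hand ownership to the caller */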

Thanks for sharing your opinions.
Andre
-- 
Max Planck Institute for Biology
Tel: (+49) 7071 601 829
Max-Planck-Ring 5, 72076 Tübingen, Germany
http://people.tuebingen.mpg.de/maan/
