On Fri, May 22, 2020 at 01:43:08PM -0700, Darrick J. Wong wrote:
> On Fri, May 22, 2020 at 10:30:27AM +1000, Dave Chinner wrote:
> > On Thu, May 21, 2020 at 04:13:12PM -0700, Darrick J. Wong wrote:
> > > [cc linux-xfs]
> > >
> > > On Fri, May 22, 2020 at 08:21:50AM +1000, Dave Airlie wrote:
> > > > Hi,
> > > >
> > > > Just updated a rawhide VM to the Fedora 5.7.0-rc5 kernel, did some
> > > > package building,
> > > >
> > > > got the below trace, not sure if it's known and fixed or unknown.
> > >
> > > It's a known false positive.  An inode can't simultaneously be getting
> > > reclaimed due to zero refcount /and/ be the target of a getxattr call.
> > > Unfortunately, lockdep can't tell the difference, and it seems a little
> > > strange to set NOFS on the allocation (which increases the chances of a
> > > runtime error) just to quiet that down.
> >
> > __GFP_NOLOCKDEP is the intended flag for telling memory allocation
> > that lockdep is stupid.
> >
> > However, it seems that the patches that were in progress some months
> > ago to convert XFS to kmalloc interfaces and use GFP flags
> > directly stalled - being able to mark locations like this with
> > __GFP_NOLOCKDEP was one of the main reasons for getting rid of all
> > the internal XFS memory allocation wrappers...
>
> Question is, should I spend time adding a GFP_NOLOCKDEP bandaid to XFS
> or would my time be better spent reviewing your async inode reclaim
> series to make this go away for real?

Heh. I started to write that async reclaim would make this go away,
but then I realised it won't, because we still do an XFS_ILOCK_EXCL
call in xfs_reclaim_inode() right at the end to synchronise with
anything that was blocked on the ILOCK during a lockless lookup,
waiting for reclaim to drop the lock after setting ip->i_ino = 0.

So that patchset doesn't make the lockdep issues go away. I still
need to work out if we can get rid of that ILOCK cycling in
xfs_reclaim_inode() by changing the lockless lookup code, but that's
a separate problem...

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
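
For reference, below is a minimal, hypothetical C sketch of the
__GFP_NOLOCKDEP pattern discussed above: annotating a single allocation
call site so lockdep skips its reclaim-recursion tracking there, rather
than weakening the allocation to GFP_NOFS just to silence the report.
The helper name and context are illustrative assumptions, not the actual
XFS getxattr code.

    /*
     * Hypothetical sketch only -- not the real XFS code path.  It shows
     * the general pattern of tagging one allocation with __GFP_NOLOCKDEP
     * so lockdep does not treat it as a potential reclaim recursion.
     */
    #include <linux/slab.h>
    #include <linux/gfp.h>

    static void *attr_value_buf_alloc(size_t size)
    {
            /*
             * GFP_KERNEL keeps full allocation capability (no NOFS
             * restriction); __GFP_NOLOCKDEP is a no-op unless
             * CONFIG_LOCKDEP is enabled, in which case it suppresses
             * the fs_reclaim false positive for this call site only.
             */
            return kmalloc(size, GFP_KERNEL | __GFP_NOLOCKDEP);
    }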