On Wed, Feb 21, 2018 at 08:39:16AM -0500, Brian Foster wrote:
> On Tue, Feb 20, 2018 at 09:08:30AM -0800, Darrick J. Wong wrote:
> > On Mon, Feb 19, 2018 at 08:54:57AM -0500, Brian Foster wrote:
> > > On Fri, Feb 16, 2018 at 09:57:01AM -0800, Darrick J. Wong wrote:
> > > I guess I haven't really noticed it enough to consider it an ongoing
> > > issue (which doesn't mean it isn't :P). Any pattern to the use case
> > > behind these continued reports..?
> >
> > No particular pattern, other than using/freeing agfl blocks.
> >
> I'm more curious about the use case than the workload. E.g., what's the
> purpose for using two kernels? Was it bad luck on an upgrade or
> something that implies the problematic state could reoccur (i.e.,
> active switching between good/bad kernels)?

It's the absurd cloud image creation/deployment hoops that seem popular
these days, where an image is created on one machine, mounted, modified
and made ready for production on another, and then deployed to a third
machine.

We've seen this quite a few times now where the original image and
deployment targets are running the same kernel, but the machine that
does the "make ready for production" step runs a different kernel. I've
seen several setups where RHEL7 kernels were used for steps 1&3, and a
near-TOT stable kernel was running step 2....

That's the problem case we have to care about here. People are moving
filesystem images from machine to machine, backwards and forwards,
because they don't have any idea that different machines in the cloud
workflow are running different kernels.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx