On Wed, Feb 10, 2016 at 11:32:49AM +0100, Jan Kara wrote:
> On Tue 09-02-16 10:18:53, Dan Williams wrote:
> > On Tue, Feb 9, 2016 at 9:24 AM, Jan Kara <jack@xxxxxxx> wrote:
> > > Hello,
> > >
> > > I was thinking about current issues with DAX fault locking [1] (data
> > > corruption due to racing faults allocating blocks) and also races which
> > > currently don't allow us to clear dirty tags in the radix tree due to
> > > races between faults and cache flushing [2]. Both of these exist because
> > > we don't have an equivalent of the page lock available for DAX. While we
> > > have a reasonable solution available for problem [1], so far I'm not
> > > aware of a decent solution for [2]. After briefly discussing the issue
> > > with Mel, he had a bright idea that we could use hashed locks to deal
> > > with [2] (and I think we can solve [1] with them as well). So my
> > > proposal looks as follows:
> > >
> > > DAX will have an array of mutexes (the array can be made per device,
> > > but initially a global one should be OK). We will use the mutexes in
> > > the array as a replacement for the page lock - we will use
> > > hashfn(mapping, index) to get the particular mutex protecting our
> > > offset in the mapping. On fault / page mkwrite, we'll grab the mutex
> > > similarly to the page lock and release it once we are done updating
> > > page tables. This deals with the races in [1]. When flushing caches,
> > > we grab the mutex before clearing the writeable bit in the page tables
> > > and clearing the dirty bit in the radix tree, and drop it after we
> > > have flushed caches for the pfn. This deals with the races in [2].
> > >
> > > Thoughts?
> > >
> >
> > I like the fact that this makes the locking explicit and
> > straightforward rather than something more tricky. Can we make the
> > hashfn pfn based? I'm thinking we could later reuse this as part of
> > the solution for eliminating the need to allocate struct page, and we
> > don't have the 'mapping' available in all paths...
>
> So Mel originally suggested using the pfn for hashing as well. My concern
> with using the pfn is that e.g. if you want to fill a hole, you don't have
> a pfn to lock. What you really need to protect is a logical offset in the
> file, to serialize allocation of the underlying blocks, their mapping into
> page tables, and flushing of the blocks out of caches. So using
> inode/mapping and offset for the hashing is easier (it isn't obvious to me
> that we can fix hole-filling races with pfn-based locking).

So how does that file+offset hash work when trying to lock different
ranges? Hashing file+offset to determine which lock to use only works if
the locks cover fixed-size ranges. e.g. the offset has 4k granularity for
single page faults, but we also need to handle 2MB granularity for huge
page faults, and IIRC 1GB granularity for giant page faults... What's the
plan here?

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
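
[Editor's note: for concreteness, below is a minimal userspace sketch of
the hashed-lock scheme Jan describes above - a fixed array of mutexes
indexed by a hash of (mapping, index), standing in for the missing page
lock. All names (dax_entry_lock, DAX_NR_LOCKS, dax_fault,
dax_writeback_one) and the use of pthread mutexes are illustrative
assumptions, not actual kernel code. Note that the sketch hashes at a
single fixed per-page granularity, which is exactly the limitation Dave
raises for 2MB/1GB faults.]

	/*
	 * Userspace sketch only - stands in for kernel struct mutex and
	 * the real DAX fault/writeback paths. All names are hypothetical.
	 */
	#include <pthread.h>
	#include <stdint.h>

	#define DAX_LOCK_BITS	8
	#define DAX_NR_LOCKS	(1U << DAX_LOCK_BITS)

	/* Global lock array (Jan notes this could later be per-device).
	 * Range designators are a GNU C extension, as used in the kernel. */
	static pthread_mutex_t dax_locks[DAX_NR_LOCKS] = {
		[0 ... DAX_NR_LOCKS - 1] = PTHREAD_MUTEX_INITIALIZER,
	};

	/* hashfn(mapping, index): pick the lock slot for one file offset.
	 * Any decent integer mixing function would do here. */
	static pthread_mutex_t *dax_entry_lock(void *mapping, uint64_t index)
	{
		uint64_t h = (uint64_t)(uintptr_t)mapping ^
			     (index * 0x9e3779b97f4a7c15ULL);

		h ^= h >> 33;
		return &dax_locks[h & (DAX_NR_LOCKS - 1)];
	}

	/*
	 * Fault / page mkwrite path: hold the entry lock across block
	 * allocation and the page table update, so two racing faults on
	 * the same offset cannot both allocate blocks (race [1]).
	 */
	static void dax_fault(void *mapping, uint64_t index)
	{
		pthread_mutex_t *lock = dax_entry_lock(mapping, index);

		pthread_mutex_lock(lock);
		/* ... allocate blocks, map the pfn into page tables ... */
		pthread_mutex_unlock(lock);
	}

	/*
	 * Cache flush path: hold the same lock while write-protecting the
	 * pfn, clearing the radix tree dirty tag, and flushing caches, so
	 * a racing write fault cannot redirty the pfn unseen (race [2]).
	 */
	static void dax_writeback_one(void *mapping, uint64_t index)
	{
		pthread_mutex_t *lock = dax_entry_lock(mapping, index);

		pthread_mutex_lock(lock);
		/* ... clear writeable bit, clear dirty tag, flush caches ... */
		pthread_mutex_unlock(lock);
	}

Because both paths hash the same (mapping, index) pair to the same mutex,
the fault and writeback paths serialize against each other per offset,
which is the whole point of the proposal; extending dax_entry_lock() to
2MB or 1GB ranges is the open question Dave poses.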