On Wed, Aug 21, 2019 at 04:24:40PM -0700, Christoph Hellwig wrote:
> > +
> > +/*
> > + * __vmalloc() will allocate data pages and auxiliary structures (e.g.
> > + * pagetables) with GFP_KERNEL, yet we may be under GFP_NOFS context here. Hence
> > + * we need to tell memory reclaim that we are in such a context via
> > + * PF_MEMALLOC_NOFS to prevent memory reclaim re-entering the filesystem here
> > + * and potentially deadlocking.
> > + */
> 
> Btw, I think we should eventually kill off KM_NOFS and just use
> PF_MEMALLOC_NOFS in XFS, as the interface makes so much more sense.
> But that's something for the future.

Yeah, and it's not quite as simple as just using PF_MEMALLOC_NOFS at
high levels - we'll still need to annotate callers that use KM_NOFS to
avoid lockdep false positives. i.e. any code that can be called from
GFP_KERNEL and reclaim context will throw false positives from lockdep
if we don't annotate them correctly....

> > +/*
> > + * Same as kmem_alloc_large, except we guarantee a 512 byte aligned buffer is
> > + * returned. vmalloc always returns an aligned region.
> > + */
> > +void *
> > +kmem_alloc_io(size_t size, xfs_km_flags_t flags)
> > +{
> > +	void *ptr;
> > +
> > +	trace_kmem_alloc_io(size, flags, _RET_IP_);
> > +
> > +	ptr = kmem_alloc(size, flags | KM_MAYFAIL);
> > +	if (ptr) {
> > +		if (!((long)ptr & 511))
> 
> Please use unsigned long (or uintptr_t if you want to be fancy),
> and (SECTOR_SIZE - 1).

Already changed it to uintptr_t when I did...

> As said elsewhere if we want to be fancy we should probably pass a
> request queue or something pointing to it.

.... this. Well, not exactly this - I pass in the alignment required
as an int, and the callers get it from the request queue....

> But then again I don't think
> it really matters much, it would save us the reallocation with slub debug
> for a bunch of scsi adapters that support dword aligned I/O.  But at
> least the interface would be a little more obvious.

Yup, just smoke testing it now before I resend.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
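
A minimal sketch of the scoped-NOFS idea discussed above, assuming the
generic memalloc_nofs_save()/memalloc_nofs_restore() interface from
<linux/sched/mm.h>; the caller shown here is hypothetical, not code
from this series:

#include <linux/sched/mm.h>

/* Hypothetical high-level caller - not part of the posted patches. */
static void
xfs_do_transaction_work(void)
{
	unsigned int nofs_flag;

	/*
	 * Mark the whole section as NOFS once, rather than passing
	 * KM_NOFS to each individual allocation beneath it.
	 */
	nofs_flag = memalloc_nofs_save();

	/*
	 * Allocations in here can use plain GFP_KERNEL/KM_SLEEP;
	 * direct reclaim will not re-enter the filesystem while the
	 * task flag is set.
	 */

	memalloc_nofs_restore(nofs_flag);
}

As noted above, callers reachable from both GFP_KERNEL and reclaim
context would still need lockdep annotations even with this scoping.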
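
And a rough sketch of the interface change described above - passing
the required alignment in as an int and letting callers derive it from
the request queue. Apart from the names quoted in the review, the
helpers and fallback path here are assumptions for illustration, not
the patch that was actually resent:

/*
 * Assumes the existing XFS kmem_alloc()/kmem_free()/kmem_alloc_large()
 * wrappers and <linux/blkdev.h>.
 */
void *
kmem_alloc_io(size_t size, int align_mask, xfs_km_flags_t flags)
{
	void	*ptr;

	trace_kmem_alloc_io(size, flags, _RET_IP_);

	ptr = kmem_alloc(size, flags | KM_MAYFAIL);
	if (ptr) {
		if (!((uintptr_t)ptr & align_mask))
			return ptr;
		kmem_free(ptr);
	}
	/*
	 * Fall back to the vmalloc-backed path, which is always page
	 * aligned (kept simple here; it will retry kmalloc once more).
	 */
	return kmem_alloc_large(size, flags);
}

/* Caller side: hypothetical helper to pull the mask from the queue. */
static inline int
xfs_buftarg_dma_alignment(struct xfs_buftarg *bt)
{
	return queue_dma_alignment(bdev_get_queue(bt->bt_bdev));
}

Taking the mask from the request queue means adapters that only need
dword alignment would avoid the vmalloc fallback when slub debug skews
object alignment, which is the case mentioned in the review above.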