On Thu, May 20, 2021 at 07:23:35AM +0200, Christoph Hellwig wrote:
> On Thu, May 20, 2021 at 08:40:28AM +1000, Dave Chinner wrote:
> > This will not apply (and break) the bulk alloc patch I sent out - we
> > have to ensure that the b_pages array is always zeroed before we
> > call the bulk alloc function, hence I moved the memset() in this
> > function to be unconditional. I almost cleaned up this function in
> > that patchset....
>
> The buffer is freshly allocated here using kmem_cache_zalloc, so
> b_pages can't be set, b_page_array is already zeroed from
> kmem_cache_zalloc, and the separate b_pages allocation is switched
> to use kmem_zalloc. I thought the commit log covers this, but maybe
> I need to improve it?

I think I'm still living in the past a bit, where the page array in
an active uncached buffer could change via the old "associate
memory" interface. We still actually have that interface in
userspace, but we don't have anything in the kernel that uses it any
more.

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx