On Sat 28-11-20 20:36:29, Amy Parker wrote:
> DAX uses the DAX_ZERO_PAGE bit to represent holes in files. It could also use
> a single entry, such as XArray's XA_ZERO_ENTRY. This distinguishes zero pages
> and allows us to shift DAX_EMPTY down (see patch 2/3).
>
> Signed-off-by: Amy Parker <enbyamy@xxxxxxxxx>

Thanks for the patch. The idea looks nice; however, I think there are some
technical problems with the patch. See below.

> +/*
> + * A zero entry, XA_ZERO_ENTRY, is used to represent a zero page. This
> + * definition helps with checking if an entry is a PMD size.
> + */
> +#define XA_ZERO_PMD_ENTRY DAX_PMD | (unsigned long)XA_ZERO_ENTRY
> +

Firstly, if you define a macro, we usually wrap it in parentheses like:

#define XA_ZERO_PMD_ENTRY (DAX_PMD | (unsigned long)XA_ZERO_ENTRY)

to avoid unexpected results when the macro expands next to surrounding
operators of higher precedence.

Secondly, I don't think you can combine XA_ZERO_ENTRY with DAX_PMD (or any
other bits, for that matter). XA_ZERO_ENTRY is defined as xa_mk_internal(257),
which is ((257 << 2) | 2) - the DAX bits will overlap with the bits xarray
internal entries are using, and things will break.

Honestly, I find it somewhat cumbersome to use xarray internal entries for
DAX purposes, since all the locking (using DAX_LOCKED) and size-checking
(using DAX_PMD) functions would have to special-case internal entries to
operate on a different set of bits. It could be done, sure, but I'm not sure
it is worth the trouble for saving two bits in DAX entries (we could get rid
of the DAX_ZERO_PAGE and DAX_EMPTY bits this way). But maybe Matthew has some
idea how to do this in an elegant way...
								Honza

> static unsigned long dax_to_pfn(void *entry)
> {
> 	return xa_to_value(entry) >> DAX_SHIFT;
> @@ -114,7 +119,7 @@ static bool dax_is_pte_entry(void *entry)
>
> static int dax_is_zero_entry(void *entry)
> {
> -	return xa_to_value(entry) & DAX_ZERO_PAGE;
> +	return xa_to_value(entry) & (unsigned long)XA_ZERO_ENTRY;
> }
>
> static int dax_is_empty_entry(void *entry)
> @@ -738,7 +743,7 @@ static void *dax_insert_entry(struct xa_state *xas,
> 	if (dirty)
> 		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
>
> -	if (dax_is_zero_entry(entry) && !(flags & DAX_ZERO_PAGE)) {
> +	if (dax_is_zero_entry(entry) && !(flags & (unsigned long)XA_ZERO_ENTRY)) {
> 		unsigned long index = xas->xa_index;
> 		/* we are replacing a zero page with block mapping */
> 		if (dax_is_pmd_entry(entry))
> @@ -1047,7 +1052,7 @@ static vm_fault_t dax_load_hole(struct xa_state *xas,
> 	vm_fault_t ret;
>
> 	*entry = dax_insert_entry(xas, mapping, vmf, *entry, pfn,
> -			DAX_ZERO_PAGE, false);
> +			XA_ZERO_ENTRY, false);
>
> 	ret = vmf_insert_mixed(vmf->vma, vaddr, pfn);
> 	trace_dax_load_hole(inode, vmf, ret);
> @@ -1434,7 +1439,7 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
>
> 	pfn = page_to_pfn_t(zero_page);
> 	*entry = dax_insert_entry(xas, mapping, vmf, *entry, pfn,
> -			DAX_PMD | DAX_ZERO_PAGE, false);
> +			XA_ZERO_PMD_ENTRY, false);
>
> 	if (arch_needs_pgtable_deposit()) {
> 		pgtable = pte_alloc_one(vma->vm_mm);
> --
> 2.29.2

--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR