On 6/22/21 6:45 PM, Hannes Reinecke wrote:
> On 6/15/21 7:49 AM, Coly Li wrote:
>> From: Jianpeng Ma <jianpeng.ma@xxxxxxxxx>
>>
>> This nvm pages allocator will implement the simple buddy to manage the
>> nvm address space. This patch initializes this buddy for new namespace.
>>
> Please use 'buddy allocator' instead of just 'buddy'.

Will update in next post.

>
>> the unit of alloc/free of the buddy is page. DAX device has their
>> struct page(in dram or PMEM).
>>
>> struct {	/* ZONE_DEVICE pages */
>> 	/** @pgmap: Points to the hosting device page map. */
>> 	struct dev_pagemap *pgmap;
>> 	void *zone_device_data;
>> 	/*
>> 	 * ZONE_DEVICE private pages are counted as being
>> 	 * mapped so the next 3 words hold the mapping, index,
>> 	 * and private fields from the source anonymous or
>> 	 * page cache page while the page is migrated to device
>> 	 * private memory.
>> 	 * ZONE_DEVICE MEMORY_DEVICE_FS_DAX pages also
>> 	 * use the mapping, index, and private fields when
>> 	 * pmem backed DAX files are mapped.
>> 	 */
>> };
>>
>> ZONE_DEVICE pages only use pgmap. Other 4 words[16/32 bytes] don't use.
>> So the second/third word will be used as 'struct list_head ' which list
>> in buddy. The fourth word(that is normal struct page::index) store pgoff
>> which the page-offset in the dax device. And the fifth word (that is
>> normal struct page::private) store order of buddy. page_type will be used
>> to store buddy flags.
>>
>> Reported-by: kernel test robot <lkp@xxxxxxxxx>
>> Reported-by: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
>> Signed-off-by: Jianpeng Ma <jianpeng.ma@xxxxxxxxx>
>> Co-developed-by: Qiaowei Ren <qiaowei.ren@xxxxxxxxx>
>> Signed-off-by: Qiaowei Ren <qiaowei.ren@xxxxxxxxx>
>> Signed-off-by: Coly Li <colyli@xxxxxxx>
>> ---
>>  drivers/md/bcache/nvm-pages.c   | 156 +++++++++++++++++++++++++++++++-
>>  drivers/md/bcache/nvm-pages.h   |   6 ++
>>  include/uapi/linux/bcache-nvm.h |  10 +-
>>  3 files changed, 165 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/md/bcache/nvm-pages.c b/drivers/md/bcache/nvm-pages.c
>> index 18fdadbc502f..804ee66e97be 100644
>> --- a/drivers/md/bcache/nvm-pages.c
>> +++ b/drivers/md/bcache/nvm-pages.c
>> @@ -34,6 +34,10 @@ static void release_nvm_namespaces(struct bch_nvm_set *nvm_set)
>>  	for (i = 0; i < nvm_set->total_namespaces_nr; i++) {
>>  		ns = nvm_set->nss[i];
>>  		if (ns) {
>> +			kvfree(ns->pages_bitmap);
>> +			if (ns->pgalloc_recs_bitmap)
>> +				bitmap_free(ns->pgalloc_recs_bitmap);
>> +
>>  			blkdev_put(ns->bdev, FMODE_READ|FMODE_WRITE|FMODE_EXEC);
>>  			kfree(ns);
>>  		}
>> @@ -48,17 +52,130 @@ static void release_nvm_set(struct bch_nvm_set *nvm_set)
>>  	kfree(nvm_set);
>>  }
>>
>> +static struct page *nvm_vaddr_to_page(struct bch_nvm_namespace *ns, void *addr)
>> +{
>> +	return virt_to_page(addr);
>> +}
>> +
>> +static void *nvm_pgoff_to_vaddr(struct bch_nvm_namespace *ns, pgoff_t pgoff)
>> +{
>> +	return ns->kaddr + (pgoff << PAGE_SHIFT);
>> +}
>> +
>> +static inline void remove_owner_space(struct bch_nvm_namespace *ns,
>> +				      pgoff_t pgoff, u64 nr)
>> +{
>> +	while (nr > 0) {
>> +		unsigned int num = nr > UINT_MAX ?
>> 						UINT_MAX : nr;
>> +
>> +		bitmap_set(ns->pages_bitmap, pgoff, num);
>> +		nr -= num;
>> +		pgoff += num;
>> +	}
>> +}
>> +
>> +#define BCH_PGOFF_TO_KVADDR(pgoff) ((void *)((unsigned long)pgoff << PAGE_SHIFT))
>> +
>>  static int init_owner_info(struct bch_nvm_namespace *ns)
>>  {
>>  	struct bch_owner_list_head *owner_list_head = ns->sb->owner_list_head;
>> +	struct bch_nvm_pgalloc_recs *sys_recs;
>> +	int i, j, k, rc = 0;
>>
>>  	mutex_lock(&only_set->lock);
>>  	only_set->owner_list_head = owner_list_head;
>>  	only_set->owner_list_size = owner_list_head->size;
>>  	only_set->owner_list_used = owner_list_head->used;
>> +
>> +	/* remove used space */
>> +	remove_owner_space(ns, 0, div_u64(ns->pages_offset, ns->page_size));
>> +
>> +	sys_recs = ns->kaddr + BCH_NVM_PAGES_SYS_RECS_HEAD_OFFSET;
>> +	/* suppose no hole in array */
>> +	for (i = 0; i < owner_list_head->used; i++) {
>> +		struct bch_nvm_pages_owner_head *head = &owner_list_head->heads[i];
>> +
>> +		for (j = 0; j < BCH_NVM_PAGES_NAMESPACES_MAX; j++) {
>> +			struct bch_nvm_pgalloc_recs *pgalloc_recs = head->recs[j];
>> +			unsigned long offset = (unsigned long)ns->kaddr >> PAGE_SHIFT;
>> +			struct page *page;
>> +
>> +			while (pgalloc_recs) {
>> +				u32 pgalloc_recs_pos = (unsigned int)(pgalloc_recs - sys_recs);
>> +
>> +				if (memcmp(pgalloc_recs->magic, bch_nvm_pages_pgalloc_magic, 16)) {
>> +					pr_info("invalid bch_nvm_pages_pgalloc_magic\n");
>> +					rc = -EINVAL;
>> +					goto unlock;
>> +				}
>> +				if (memcmp(pgalloc_recs->owner_uuid, head->uuid, 16)) {
>> +					pr_info("invalid owner_uuid in bch_nvm_pgalloc_recs\n");
>> +					rc = -EINVAL;
>> +					goto unlock;
>> +				}
>> +				if (pgalloc_recs->owner != head) {
>> +					pr_info("invalid owner in bch_nvm_pgalloc_recs\n");
>> +					rc = -EINVAL;
>> +					goto unlock;
>> +				}
>> +
>> +				/* recs array can has hole */
>
> can have holes ?

It means that valid records are not always stored continuously in the
recs[] array of struct bch_nvm_pgalloc_recs.
Because currently only an 8-byte write to an 8-byte aligned address on
NVDIMM is atomic with respect to power failure. When a record is removed
from the recs[] array because a block of NVDIMM pages is freed, if the
following valid records were moved forward to keep all records stored
continuously, such a memory movement would not be atomic across a power
failure. Then we would need a more complicated design to keep the
on-NVDIMM metadata consistent across power failure. Allowing holes
(records may be stored non-continuously in the recs[] array) makes
things much simpler here.

Thanks for your review.

Coly Li