Re: [RFC] iomap: use huge zero folio in iomap_dio_zero

> so unless submit_bio() can handle the fallback to "create a new bio
> full of zeroes and resubmit it to the device" if the original fails,
> we're a little mismatched.  I'm not really familiar with either part of
> this code, so I don't have much in the way of bright ideas.  Perhaps
> we go back to the "allocate a large folio at filesystem mount" plan.

So one thing that became clear after yesterday's discussion was that we
should *not* use a PMD-sized page for sub-block zeroing: on some
architectures that wastes a lot of memory (on arm64 with 64k base pages, a
PMD-sized folio is 512M) just to zero out a 64k FS block.

So Chinner proposed the idea of using the iomap_init() function to
allocate a large zero folio that could then be used in iomap_dio_zero().

The general agreement was that a 64k folio is enough for now; we can
always increase the size and optimize further when required.

This is a rough prototype of what it might look like:

diff --git a/fs/internal.h b/fs/internal.h
index 7ca738904e34..dad5734b2f75 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -35,6 +35,12 @@ static inline void bdev_cache_init(void)
 int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
                get_block_t *get_block, const struct iomap *iomap);
 
+/*
+ * iomap/buffered-io.c
+ */
+
+extern struct folio *zero_fsb_folio;
+
 /*
  * char_dev.c
  */
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 4e8e41c8b3c0..48235765df7a 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -42,6 +42,7 @@ struct iomap_folio_state {
 };
 
 static struct bio_set iomap_ioend_bioset;
+struct folio *zero_fsb_folio;
 
 static inline bool ifs_is_fully_uptodate(struct folio *folio,
                struct iomap_folio_state *ifs)
@@ -1985,8 +1986,14 @@ iomap_writepages(struct address_space *mapping, struct writeback_control *wbc,
 }
 EXPORT_SYMBOL_GPL(iomap_writepages);
 
 static int __init iomap_init(void)
 {
+       void            *addr = kzalloc(16 * PAGE_SIZE, GFP_KERNEL);
+
+       if (!addr)
+               return -ENOMEM;
+
+       zero_fsb_folio = virt_to_folio(addr);
        return bioset_init(&iomap_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
                           offsetof(struct iomap_ioend, io_bio),
                           BIOSET_NEED_BVECS);
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index f3b43d223a46..59a65c3ccf13 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -236,17 +236,23 @@ static void iomap_dio_zero(const struct iomap_iter *iter, struct iomap_dio *dio,
                loff_t pos, unsigned len)
 {
        struct inode *inode = file_inode(dio->iocb->ki_filp);
-       struct page *page = ZERO_PAGE(0);
        struct bio *bio;
 
-       bio = iomap_dio_alloc_bio(iter, dio, 1, REQ_OP_WRITE | REQ_SYNC | REQ_IDLE);
+       /*
+        * The preallocated zero folio is 16 * PAGE_SIZE (64k with 4k pages).
+        */
+       WARN_ON_ONCE(len > (16 * PAGE_SIZE));
+
+       bio = iomap_dio_alloc_bio(iter, dio, BIO_MAX_VECS,
+                                 REQ_OP_WRITE | REQ_SYNC | REQ_IDLE);
        fscrypt_set_bio_crypt_ctx(bio, inode, pos >> inode->i_blkbits,
                                  GFP_KERNEL);
+
        bio->bi_iter.bi_sector = iomap_sector(&iter->iomap, pos);
        bio->bi_private = dio;
        bio->bi_end_io = iomap_dio_bio_end_io;
 
-       __bio_add_page(bio, page, len, 0);
+       bio_add_folio_nofail(bio, zero_fsb_folio, len, 0);
        iomap_dio_submit_bio(iter, dio, bio, pos);
 }