Re: [RFC] iomap: use huge zero folio in iomap_dio_zero

On 5/16/24 17:02, Pankaj Raghav (Samsung) wrote:
On Wed, May 15, 2024 at 07:03:20PM +0100, Matthew Wilcox wrote:
On Wed, May 15, 2024 at 03:59:43PM +0000, Pankaj Raghav (Samsung) wrote:
  static int __init iomap_init(void)
  {
+       void            *addr = kzalloc(16 * PAGE_SIZE, GFP_KERNEL);

Don't use XFS coding style outside XFS.

kzalloc() does not guarantee page alignment much less alignment to
a folio.  It happens to work today, but that is an implementation
artefact.

+
+       if (!addr)
+               return -ENOMEM;
+
+       zero_fsb_folio = virt_to_folio(addr);

We also don't guarantee that calling kzalloc() gives you a virtual
address that can be converted to a folio.  You need to allocate a folio
to be sure that you get a folio.

Of course, you don't actually need a folio.  You don't need any of the
folio metadata and can just use raw pages.

+       /*
+        * The zero folio used is 64k.
+        */
+       WARN_ON_ONCE(len > (16 * PAGE_SIZE));

PAGE_SIZE is not necessarily 4KiB.

+       bio = iomap_dio_alloc_bio(iter, dio, BIO_MAX_VECS,
+                                 REQ_OP_WRITE | REQ_SYNC | REQ_IDLE);

The point was that we now only need one biovec, not MAX.


Thanks for the comments. I think it all makes sense:

diff --git a/fs/internal.h b/fs/internal.h
index 7ca738904e34..e152b77a77e4 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -35,6 +35,14 @@ static inline void bdev_cache_init(void)
  int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
                 get_block_t *get_block, const struct iomap *iomap);
+/*
+ * iomap/buffered-io.c
+ */
+
+#define ZERO_FSB_SIZE (65536)
+#define ZERO_FSB_ORDER (get_order(ZERO_FSB_SIZE))
+extern struct page *zero_fs_block;
+
  /*
   * char_dev.c
   */
But why?
We already have a perfectly fine hugepage zero page in huge_memory.c. Shouldn't we rather export that one and use it?
(Actually I have some patches for doing so...)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 4e8e41c8b3c0..36d2f7edd310 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -42,6 +42,7 @@ struct iomap_folio_state {
  };
static struct bio_set iomap_ioend_bioset;
+struct page *zero_fs_block;
static inline bool ifs_is_fully_uptodate(struct folio *folio,
                 struct iomap_folio_state *ifs)
@@ -1985,8 +1986,13 @@ iomap_writepages(struct address_space *mapping, struct writeback_control *wbc,
  }
  EXPORT_SYMBOL_GPL(iomap_writepages);
+
  static int __init iomap_init(void)
  {
+       zero_fs_block = alloc_pages(GFP_KERNEL | __GFP_ZERO, ZERO_FSB_ORDER);
+       if (!zero_fs_block)
+               return -ENOMEM;
+
         return bioset_init(&iomap_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
                            offsetof(struct iomap_ioend, io_bio),
                            BIOSET_NEED_BVECS);
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index f3b43d223a46..50c2bca8a347 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -236,17 +236,22 @@ static void iomap_dio_zero(const struct iomap_iter *iter, struct iomap_dio *dio,
                 loff_t pos, unsigned len)
  {
         struct inode *inode = file_inode(dio->iocb->ki_filp);
-       struct page *page = ZERO_PAGE(0);
         struct bio *bio;
+       /*
+        * Max block size supported is 64k
+        */
+       WARN_ON_ONCE(len > ZERO_FSB_SIZE);
+
         bio = iomap_dio_alloc_bio(iter, dio, 1, REQ_OP_WRITE | REQ_SYNC | REQ_IDLE);
         fscrypt_set_bio_crypt_ctx(bio, inode, pos >> inode->i_blkbits,
                                   GFP_KERNEL);
+
         bio->bi_iter.bi_sector = iomap_sector(&iter->iomap, pos);
         bio->bi_private = dio;
         bio->bi_end_io = iomap_dio_bio_end_io;
-       __bio_add_page(bio, page, len, 0);
+       __bio_add_page(bio, zero_fs_block, len, 0);
         iomap_dio_submit_bio(iter, dio, bio, pos);
  }


--
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@xxxxxxx                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich




