Re: [PATCH v8 06/10] iomap: fix iomap_dio_zero() for fs bs > system page size

> > +fs_initcall(iomap_pagecache_init);
> 
> s/iomap_pagecache_init/iomap_buffered_init/
> 
> We don't use pagecache naming anywhere else in the file.

Got it.
> 
> > +/*
> > + * Used for sub block zeroing in iomap_dio_zero()
> > + */
> > +#define ZERO_PAGE_64K_SIZE (65536)
> 
> just use SZ_64K
> 
> > +#define ZERO_PAGE_64K_ORDER (get_order(ZERO_PAGE_64K_SIZE))
> 
> No really point in having this.

Hmm, I used it twice, hence the define. But if we decide to get rid of
set_memory_ro(), then the define no longer makes sense.

> 
> > +static struct page *zero_page_64k;
> 
> This should be a folio.  Encoding the size in the name is also really
> weird and just creates churn when we have to increase it.

Willy suggested we could use raw pages here, as we don't need the
metadata that comes with a folio. [0]
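
FWIW, with a raw page the bio side is a one-liner (sketch; the bio is
set up the same way the existing iomap_dio_zero() does it):

	__bio_add_page(bio, zero_page_64k, len, 0);

A folio version would only swap in bio_add_folio() and carry metadata
we never read.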

> 
> 
> > +	/*
> > +	 * Max block size supported is 64k
> > +	 */
> > +	WARN_ON_ONCE(len > ZERO_PAGE_64K_SIZE);
> 
> 
> A WARN_ON without actually erroring out here is highly dangerous. 

I agree, but I think we decided that we are safe with 64k for now, as
no filesystem that uses iomap has a block size > 64k.

But this function will need changes once we decide to go beyond 64k:
it should return an error instead of returning nothing. Until then,
WARN_ON_ONCE is a good stopgap for anyone developing support for block
sizes beyond 64k[1].
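
For the record, the eventual shape could look something like this
untested sketch (callers of iomap_dio_zero() would then have to
propagate the error; the bio plumbing just mirrors what
fs/iomap/direct-io.c already does):

static int iomap_dio_zero(const struct iomap_iter *iter, struct iomap_dio *dio,
		loff_t pos, unsigned len)
{
	struct bio *bio;

	if (!len)
		return 0;
	/*
	 * The preallocated zero block is only 64k; fail instead of
	 * submitting a short write once block sizes grow beyond that.
	 */
	if (WARN_ON_ONCE(len > SZ_64K))
		return -EINVAL;

	bio = iomap_dio_alloc_bio(iter, dio, 1,
			REQ_OP_WRITE | REQ_SYNC | REQ_IDLE);
	bio->bi_iter.bi_sector = iomap_sector(&iter->iomap, pos);
	bio->bi_private = dio;
	bio->bi_end_io = iomap_dio_bio_end_io;

	__bio_add_page(bio, zero_page_64k, len, 0);
	iomap_dio_submit_bio(iter, dio, bio, pos);
	return 0;
}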

> 
> > +
> >  	bio = iomap_dio_alloc_bio(iter, dio, 1, REQ_OP_WRITE | REQ_SYNC | REQ_IDLE);
> 
> Overly long line here.
> 

Not part of my change, so I didn't bother reformatting it. :)

> > +
> > +static int __init iomap_dio_init(void)
> > +{
> > +	zero_page_64k = alloc_pages(GFP_KERNEL | __GFP_ZERO,
> > +				    ZERO_PAGE_64K_ORDER);
> 
> > +
> > +	if (!zero_page_64k)
> > +		return -ENOMEM;
> > +
> > +	set_memory_ro((unsigned long)page_address(zero_page_64k),
> > +		      1U << ZERO_PAGE_64K_ORDER);
> 
> What's the point of the set_memory_ro here?  Yes, we won't write to
> it, but it's hardly an attack vector and fragments the direct map.

That is a good point. Darrick suggested marking it read-only since we
never write to it, but I did not know about the direct map fragmentation
that set_memory_ro() causes. So there is probably no value in calling
set_memory_ro() here.
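
If we drop it, iomap_dio_init() collapses to something like this
(untested, also folding in the SZ_64K/get_order() cleanups suggested
above):

static int __init iomap_dio_init(void)
{
	/*
	 * One zeroed 64k block, kept writable: no set_memory_ro(),
	 * so the direct map is left unfragmented.
	 */
	zero_page_64k = alloc_pages(GFP_KERNEL | __GFP_ZERO,
				    get_order(SZ_64K));
	if (!zero_page_64k)
		return -ENOMEM;
	return 0;
}
fs_initcall(iomap_dio_init);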


--
Pankaj

[0] https://lore.kernel.org/linux-fsdevel/ZkT46AsZ3WghOArL@xxxxxxxxxxxxxxxxxxxx/
[1] I spent a lot of time banging my head over why I was getting FS
corruption when doing direct IO in XFS while adding LBS support, before
I found the PAGE_SIZE assumption here.



