RE: f2fs compressed file bio merge problem

> -----Original Message-----
> From: Eric Biggers <ebiggers@xxxxxxxxxx>
> Sent: Tuesday, April 19, 2022 1:30 AM
> To: 常凤楠 <changfengnan@xxxxxxxx>
> Cc: jaegeuk@xxxxxxxxxx; chao@xxxxxxxxxx; axboe@xxxxxxxxx;
> linux-f2fs-devel@xxxxxxxxxxxxxxxxxxxxx; linux-fscrypt@xxxxxxxxxxxxxxx
> Subject: Re: f2fs compressed file bio merge problem
> 
> [+Cc linux-fscrypt]
> 
> On Mon, Apr 18, 2022 at 08:15:47AM +0000, 常凤楠 wrote:
> > Hi:
> > 	When I test sequential writes on an f2fs compressed file, I see a
> significant performance degradation when mounting with inlinecrypt. Here is my
> analysis:
> > 	f2fs writes a compressed file in units of clusters; after compression, a
> cluster has at most three valid pages to write. So across clusters the page
> indices are not contiguous. For example: cluster 0 writes pages 0 and 1,
> cluster 1 writes pages 4 and 5.
> > 	In f2fs_crypt_mergeable_bio(), fscrypt_mergeable_bio() checks whether
> the file logical block numbers are contiguous, so multiple clusters cannot be
> merged into one bio.
> > 	In my test, the inlinecrypt mount option causes sequential-write
> performance to drop by about half.
> > 	The attached file is my fio test configuration.
> > 	This is a tricky problem for me. Is there any solution?
> 
> Thanks for clarifying that you are using f2fs compression; in your previous
> message you didn't mention this
> (https://lore.kernel.org/all/KL1PR0601MB4003998B841513BCAA386ADEBBEE
> 9@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/T/#u).
> 
> Unfortunately, I don't believe there is any practical way that we could do the
> encryption differently that would result in this no longer being a problem.

For now I agree with you. Maybe someone else has a better idea?

> 
> This is because for adjacent clusters to have contiguous DUNs, the DUNs
> would have to increment according to the compressed size, not the
> uncompressed size.
> However, in this case it wouldn't be possible to support random-access writes,
> since any write would require re-writing the entire file.
> 
> This could be provided as an option for read-only filesystems, I suppose.  But
> I doubt that that is your use case.
Yes, maybe we can fix this for read-only filesystems first. This problem exists in sequential reads too.

> 
> - Eric



