Re: Question on slow fallocate

Dave Chinner <david@xxxxxxxxxxxxx> writes:

> On Thu, Jun 22, 2023 at 02:34:18PM +0900, Masahiko Sawada wrote:
>> Hi all,
>> 
>> When testing PostgreSQL, I found a performance degradation. After some
>> investigation, it ultimately came down to the attached simple C program,
>> and it turned out that the performance degradation happens only on the
>> xfs filesystem (it doesn't happen on either ext3 or ext4). In short, the
>> program alternately does two things to extend a file: (1) call
>> posix_fallocate() to extend by 8192 bytes
>
> This is a well-known anti-pattern - it always causes problems. Do
> not do this.
>
>> and (2) call pwrite() to
>> extend by 8192 bytes. If I do only (1) or only (2), the program
>> completes in 2 sec, but if I do (1) and (2) alternately, it
>> completes in 90 sec.
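
(A minimal sketch of the pattern being described, reconstructed only from
the description above - the attached test.c is not shown here, so the
argument handling, iteration count, and counters are guesses.)

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLKSZ	8192
#define NOPS	200000

int main(int argc, char **argv)
{
	long nfalloc = 0, nwrite = 0;
	char buf[BLKSZ];
	off_t off = 0;
	int mode, fd;

	if (argc < 3) {
		fprintf(stderr, "usage: %s <file> <mode:1|2>\n", argv[0]);
		return 1;
	}
	mode = atoi(argv[2]);
	fd = open(argv[1], O_CREAT | O_TRUNC | O_WRONLY, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	memset(buf, 'a', sizeof(buf));

	for (long i = 0; i < NOPS; i++, off += BLKSZ) {
		/* mode 1: every operation is a size-extending fallocate;
		 * mode 2: alternate fallocate and pwrite, each extending
		 * the file by one 8kB block. */
		if (mode == 1 || (i % 2) == 0) {
			int err = posix_fallocate(fd, off, BLKSZ);
			if (err) {
				fprintf(stderr, "posix_fallocate: %s\n",
					strerror(err));
				return 1;
			}
			nfalloc++;
		} else {
			if (pwrite(fd, buf, BLKSZ, off) != BLKSZ) {
				perror("pwrite");
				return 1;
			}
			nwrite++;
		}
	}
	printf("total\t%ld\nfallocate\t%ld\nfilewrite\t%ld\n",
	       nfalloc + nwrite, nfalloc, nwrite);
	close(fd);
	return 0;
}
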
>
> Well, yes. Using fallocate to extend the file has very different
> constraints to using pwrite to extend the file.
>
>> $ gcc -o test test.c
>> $ time ./test test.1 1
>> total   200000
>> fallocate       200000
>> filewrite       0
>
> No data is written here, so this is just a series of 8kB allocations
> and file size extension operations. There are no constraints here
> because it is a pure metadata operation.
>
>> real    0m1.305s
>> user    0m0.050s
>> sys     0m1.255s
>> 
>> $ time ./test test.2 2
>> total   200000
>> fallocate       100000
>> filewrite       100000
>>
>> real    1m29.222s
>> user    0m0.139s
>> sys     0m3.139s
>
> Now we have fallocate extending the file and doing unwritten extent
> allocation, followed by writing into that unwritten extent which
> then does unwritten extent conversion.
>
> This introduces data vs metadata update ordering constraints to the
> workload.
>
> The problem here is the "truncate up" operation that
> fallocate does to move the file size. The "truncate up" is going
> to move the on-disk file size to the end of the fallocated range via
> a journal transaction, and so it will expose the range of the
> previous write as containing valid data.
>
> However, the previous data write is still only in memory and not on
> disk. The result of journalling the file size change is that if we
> crash after the size change is made but the data is not on disk,
> we end up with lost data - the file contains zeros (or NULLs) where
> the in-memory data previously existed.
>
> Go google for "NULL file data exposure" and you'll see this is a
> problem we fixed in ~2006, caused by extending the file size on disk
> without first having written all the in-memory data into the file.
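
Purely as an illustration of what avoiding the anti-pattern can look like
(nothing in this thread prescribes it): an application that wants
preallocation while appending could use fallocate() with
FALLOC_FL_KEEP_SIZE, so the preallocation never moves the file size and
only pwrite() extends it, sidestepping the "truncate up" described above.
The chunk size and helper name below are arbitrary choices for the sketch.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BLKSZ	8192
#define CHUNK	(1024L * BLKSZ)	/* preallocate 8MB at a time (arbitrary) */

/* Append one 8kB block; preallocate ahead without changing the file size. */
static int append_block(int fd, off_t *off, const char *buf)
{
	if (*off % CHUNK == 0 &&
	    fallocate(fd, FALLOC_FL_KEEP_SIZE, *off, CHUNK) != 0) {
		perror("fallocate");
		return -1;
	}
	if (pwrite(fd, buf, BLKSZ, *off) != BLKSZ) {
		perror("pwrite");
		return -1;
	}
	*off += BLKSZ;
	return 0;
}

int main(int argc, char **argv)
{
	char buf[BLKSZ];
	off_t off = 0;
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_CREAT | O_TRUNC | O_WRONLY, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	memset(buf, 'a', sizeof(buf));
	for (long i = 0; i < 200000; i++)
		if (append_block(fd, &off, buf) != 0)
			return 1;
	close(fd);
	return 0;
}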

I guess the <patch> below is the one you are speaking of. So this prevents
exposing NULLs within a file in case of a crash.

I guess the behavior is not the same on ext4. ext4 does not seem to call
filemap_write_and_wait_range() if the new i_disksize is larger than the
old size. So I think ext4 must be OK with the file containing NULLs in
between in case of a crash. That's why I think the slow performance is
not observed on ext4.

A few queries:
- If the user doesn't issue a flush and the system crashes, then it is
  not expected that the file will have all the data anyway, right?

- Also, about the "data/inode size update order" you mention in this
  patch: is this something that all filesystems should follow?

- I was wondering what exactly it breaks that applications depend on?
  Because not all filesystems seem to follow this practice, right?


Thanks for the detailed explanation! I got interested in this thread
after reading your explanation, and because the thread mentions that this
happens with PostgreSQL.

-ritesh

<patch>
[XFS] Fix inode size update before data write in xfs_setattr

When changing the file size by a truncate() call, we log the change in the
inode size. However, we do not flush any outstanding data that might not
have been written to disk, thereby violating the data/inode size update
order. This can leave files full of NULLs on crash.

Hence if we are truncating the file, flush any unwritten data that may lie
between the current on-disk inode size and the new inode size that is being
logged to ensure that ordering is preserved.
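
For what it's worth, here is my reading of that description as a
conceptual, kernel-flavoured sketch. It is not the actual xfs_setattr()
code: filemap_write_and_wait_range() is the real writeback helper, while
log_size_update() is a made-up placeholder for logging the size change in
a transaction.

/* Kernel-context sketch: builds only against kernel headers. */
#include <linux/fs.h>
#include <linux/pagemap.h>

/* Made-up placeholder for "log the inode size change in a transaction". */
static int log_size_update(struct inode *inode, loff_t new_size);

static int grow_isize_ordered(struct inode *inode, loff_t old_disk_size,
			      loff_t new_size)
{
	int error;

	if (new_size > old_disk_size) {
		/* First push out any dirty page-cache data in the range
		 * that the larger on-disk size is about to expose... */
		error = filemap_write_and_wait_range(inode->i_mapping,
						     old_disk_size,
						     new_size - 1);
		if (error)
			return error;
	}

	/* ...and only then make the new size visible on disk. */
	return log_size_update(inode, new_size);
}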





