https://bugzilla.kernel.org/show_bug.cgi?id=205833

--- Comment #1 from bfoster@xxxxxxxxxx ---

On Wed, Dec 11, 2019 at 02:03:52PM +0000, bugzilla-daemon@xxxxxxxxxxxxxxxxxxx wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=205833
>
>             Bug ID: 205833
>            Summary: fsfreeze blocks close(fd) on xfs sometimes
>            Product: File System
>            Version: 2.5
>     Kernel Version: 4.15.0-55-generic #60-Ubuntu
>           Hardware: Intel
>                 OS: Linux
>               Tree: Mainline
>             Status: NEW
>           Severity: normal
>           Priority: P1
>          Component: XFS
>           Assignee: filesystem_xfs@xxxxxxxxxxxxxxxxxxxxxx
>           Reporter: kernel.org@xxxxxxxxx
>         Regression: No
>
> Dear all
>
> I noticed the bug while setting up a backup with fsfreeze and restic.
>
> How I reproduce it:
>
> 1. Write multiple MB to a file (e.g. 100MB); after one or two MB, freeze
>    the filesystem from the sidecar pod
> 2. From the sidecar pod, issue multiple `strace tail /generated/data/0.txt`
> 3. After a couple of tries, strace shows that the `read(...)` works but
>    `close(...)` hangs
> 4. From now on, all `read(...)` operations are blocked until the freeze is
>    lifted
>

I'm not familiar with your user environment, but it sounds like the use
case is essentially to read a file that is concurrently being written to
and to freeze the fs. From there, you're expecting the readers to exit,
but instead observe them blocked on close().

The caveat to note here is that close() is not necessarily a read-only
operation from the perspective of XFS internals. A close() (or ->release()
from the fs perspective) can do things like truncate post-eof block
allocation, which requires a transaction and thus blocks on a frozen fs.
To confirm, could you post a stack trace of one of your blocked reader
tasks (i.e. 'cat /proc/<pid>/stack')?

I'm not necessarily sure blocking here is a bug if that is the situation.
We most likely wouldn't want to skip post-eof truncation on a file simply
because the fs was frozen. That said, I thought Dave had proposed patches
at one point to mitigate free space fragmentation side effects of post-eof
truncation, and one such patch was to skip the truncation on read-only
fds. I'll have to dig around, or perhaps Dave can chime in, but I'm
curious if that would also help with this use case..

Brian

> System: Ubuntu 18.04.3 LTS
> CPU: Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
> Storage: /dev/mapper/mpathXX on /var/lib/kubelet/plugins/hpe.com/... type xfs
> (rw,noatime,attr2,inode64,noquota)
>
> I used this tool to generate the file. The number of concurrent files does
> not appear to matter that much. I was able to trigger the bug, tested with
> 2, 4 and 32 parallel files:
> https://gitlab.com/dns2utf8/multi_file_writer
>
> Cheers,
> Stefan
>
> PS: I opened a bug at the tool vendor too:
> https://github.com/vmware-tanzu/velero/issues/2113
>
> --
> You are receiving this mail because:
> You are watching the assignee of the bug.
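
For anyone who wants to poke at this outside of the Kubernetes setup, below
is a minimal userspace sketch of the scenario described above. It is
untested and makes assumptions: /mnt/xfs is an XFS mount point, the test
file path is a placeholder, and the program runs as root (FIFREEZE/FITHAW
need CAP_SYS_ADMIN). It writes enough buffered data that XFS keeps
speculative preallocation beyond EOF, freezes the filesystem, and then
closes the fd; per the ->release()/post-eof truncation explanation above,
close() is expected to block until a forked child thaws the fs.

/*
 * Hypothetical repro sketch (untested).  Assumptions: MNT is an XFS
 * mount point, run as root, and the kernel trims post-EOF speculative
 * preallocation from ->release() as described above.
 */
#include <fcntl.h>
#include <linux/fs.h>		/* FIFREEZE, FITHAW */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define MNT		"/mnt/xfs"			/* assumed XFS mount point */
#define TESTFILE	MNT "/close-freeze-test"	/* placeholder file name */

int main(void)
{
	static char buf[1 << 20];
	int pfd[2];

	memset(buf, 'x', sizeof(buf));
	if (pipe(pfd) < 0) { perror("pipe"); return 1; }

	pid_t pid = fork();
	if (pid < 0) { perror("fork"); return 1; }
	if (pid == 0) {
		/*
		 * Child: wait until the parent says the fs is frozen, then
		 * thaw it after a delay.  Forked before the test file is
		 * opened so the parent's close() is the last reference and
		 * actually triggers ->release().
		 */
		char c;
		close(pfd[1]);
		read(pfd[0], &c, 1);
		sleep(10);
		int mnt = open(MNT, O_RDONLY);
		if (mnt >= 0)
			ioctl(mnt, FITHAW, 0);
		_exit(0);
	}
	close(pfd[0]);

	int fd = open(TESTFILE, O_CREAT | O_WRONLY | O_TRUNC, 0644);
	if (fd < 0) { perror("open testfile"); return 1; }

	/* ~100MB of buffered writes; XFS speculatively preallocates past EOF */
	for (int i = 0; i < 100; i++) {
		if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
			perror("write");
			return 1;
		}
	}

	int mntfd = open(MNT, O_RDONLY);
	if (mntfd < 0) { perror("open mnt"); return 1; }
	if (ioctl(mntfd, FIFREEZE, 0) < 0) { perror("FIFREEZE"); return 1; }

	write(pfd[1], "g", 1);	/* tell the child the fs is now frozen */

	time_t t0 = time(NULL);
	close(fd);		/* expected to block until the child thaws the fs */
	printf("close() returned after ~%ld seconds\n", (long)(time(NULL) - t0));

	waitpid(pid, NULL, 0);
	return 0;
}

If close() returns immediately (~0 seconds), the post-eof truncation was
skipped for some reason; if it returns after roughly the child's delay, it
was blocked on the frozen fs as described in the comment above.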