On Feb 26, 2020, at 1:05 PM, Kirill Tkhai <ktkhai@xxxxxxxxxxxxx> wrote:
>
> On 26.02.2020 18:55, Christoph Hellwig wrote:
>> On Wed, Feb 26, 2020 at 04:41:16PM +0300, Kirill Tkhai wrote:
>>> This adds support for a physical hint to the fallocate2() syscall.
>>> If the @physical argument is set for ext4_fallocate(), we try to
>>> allocate blocks only from the [@physical, @physical + len] range,
>>> while other blocks are not used.
>>
>> Sorry, but this is a complete bullshit interface.  Userspace has
>> absolutely no business even thinking of physical placement.  If you
>> want to align allocations to physical block granularity boundaries,
>> that is the file system's job, not the application's job.
>
> Why?  There are two contradictory actions that a filesystem can't do at
> the same time:
>
> 1) place files at a distance from each other to minimize the number of
>    extents on possible future growth;
> 2) place small files in the same big block of the block device.
>
> At initial allocation time you never know which file will stop growing
> at some point in the future, i.e. which file is suitable for compaction.
> This knowledge only becomes available some time later.  Say, if a file
> has not been changed for a month, it is suitable for compaction with
> other files like it.
>
> If at allocation time you can determine which file won't grow in the
> future, don't be afraid, and just share your algorithm here.

Very few files grow after they are initially written/closed.  Those that
do are almost always opened with O_APPEND (e.g. log files).  It would be
reasonable to have O_APPEND cause the filesystem to reserve blocks (in
memory at least, maybe some small amount on disk like 1/4 of the current
file size) for the file to grow after it is closed.  We might use the
same heuristic for directories that grow long after initial creation.

The main exception there is VM images, because they are not really
"files" in the normal sense, but containers aggregating a lot of
different files, each created with patterns that are not visible to the
VM host.  In that case, it would be better to have the VM host tell the
filesystem that the IO pattern is "random" and not try to optimize until
the VM is cold.

> In Virtuozzo we tried to compact ext4 with the existing kernel interface:
>
> https://github.com/dmonakhov/e2fsprogs/blob/e4defrag2/misc/e4defrag2.c
>
> But it does not work well in many situations, and the main problem is
> that block allocation in the desired place is not possible.  The block
> allocator can't behave optimally for everything.
>
> If this interface is bad, can you suggest another interface to let the
> block allocator know the behavior expected from it in this specific case?

In ext4 there is already the "group" allocator, which combines multiple
small files together into a single preallocation group, so that the IO to
disk is large/contiguous.  The theory is that files written at the same
time will have similar lifespans, but that isn't always true.  If the
files are large and still being written, the allocator will reserve
additional blocks (default 8MB I think) on the expectation that it will
continue to write until it is closed.

I think (correct me if I'm wrong) that your issue is with defragmenting
small files to free up contiguous space in the filesystem?  I think once
the free space is freed of small files, defragmenting large files is
easily done.  Anything with more than 8-16MB extents will max out most
storage anyway (seek rate * IO size).
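For reference, the relocation primitive that e4defrag-style tools (like
the e4defrag2 linked above) have to work with today is roughly the
following: preallocate a donor file, then swap its blocks into the
original inode with EXT4_IOC_MOVE_EXT.  The sketch below is simplified
(the donor path, block-count math, and cleanup are placeholders, not the
actual e4defrag2 code), but it shows why the final physical location is
entirely up to the block allocator: the only control point is a plain
fallocate() on the donor.

/*
 * Simplified sketch of donor-file relocation.  struct move_extent and
 * EXT4_IOC_MOVE_EXT match the in-kernel ext4 definitions; they are
 * copied here because they are not exported in the uapi headers.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>

struct move_extent {
	uint32_t reserved;	/* must be zero */
	uint32_t donor_fd;	/* donor file descriptor */
	uint64_t orig_start;	/* logical start of orig file, in blocks */
	uint64_t donor_start;	/* logical start of donor file, in blocks */
	uint64_t len;		/* number of blocks to move */
	uint64_t moved_len;	/* out: blocks actually moved */
};

#ifndef EXT4_IOC_MOVE_EXT
#define EXT4_IOC_MOVE_EXT	_IOWR('f', 15, struct move_extent)
#endif

static int relocate(int orig_fd, off_t size, unsigned int blksize)
{
	struct move_extent me = { 0 };
	int donor_fd;

	/* The donor gets whatever blocks the allocator happens to pick;
	 * there is no way to ask for a particular physical range here. */
	donor_fd = open("donor.tmp", O_RDWR | O_CREAT | O_EXCL, 0600);
	if (donor_fd < 0)
		return -1;
	if (fallocate(donor_fd, 0, 0, size) < 0)
		goto fail;

	me.donor_fd = donor_fd;
	me.len = (size + blksize - 1) / blksize;
	/* Swap the donor's (hopefully contiguous) blocks into orig_fd. */
	if (ioctl(orig_fd, EXT4_IOC_MOVE_EXT, &me) < 0)
		goto fail;

	close(donor_fd);
	return 0;		/* donor cleanup (unlink) omitted */
fail:
	close(donor_fd);
	return -1;
}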
For that defragmentation case, an interesting userspace interface would
be an array of inode numbers (64-bit please) that should be packed
together densely, in the order they are provided (maybe a flag for that).
That allows the filesystem the freedom to find the physical blocks for
the allocation, while userspace can tell it which files are related to
each other.  Tools like "readahead" could also leverage this to
"perfectly" allocate the files used during boot into a single stream of
reads from the disk.

Cheers, Andreas
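P.S.  Purely as a strawman to make the above concrete (nothing like this
exists today, and all of the names below are invented), such a packing
request could look something like:

/*
 * Hypothetical interface only -- no such ioctl exists, the names are
 * made up for illustration.  Userspace hands the filesystem a list of
 * inodes that should be packed densely together; the filesystem keeps
 * the freedom to choose the actual physical location.
 */
#include <stdint.h>

#define FS_PACK_IN_ORDER	0x1	/* keep the given ordering on disk */

struct fs_pack_request {
	uint32_t flags;		/* e.g. FS_PACK_IN_ORDER */
	uint32_t count;		/* number of entries in inodes[] */
	uint64_t inodes[];	/* 64-bit inode numbers to co-locate */
};

/* e.g. issued against a mount fd:
 *   ioctl(mount_fd, FS_IOC_PACK_INODES, &req);
 * (FS_IOC_PACK_INODES is a made-up name with no assigned ioctl number.)
 */

A boot-time readahead tool would then just pass the inode numbers in the
order the files are read during boot.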