On Sat, Jan 11, 2025 at 09:17:49AM +0000, Artem S. Tashkinov wrote:
> Hello,
>
> I had this idea on 2021-11-07, then I thought it was wrong/stupid, now
> I've asked AI and it said it was actually not bad, so I'm bringing it
> forward now:
>
> Imagine the following scenarios:
>
> * You need to delete tens of thousands of files.
> * You need to change the permissions, ownership, or security context
>   (chmod, chown, chcon) for tens of thousands of files.
> * You need to update timestamps for tens of thousands of files.
>
> All these operations are currently relatively slow because they are
> executed sequentially, generating significant I/O overhead.
>
> What if these operations could be spooled and performed as a single
> transaction? By bundling metadata updates into one atomic operation,
> such tasks could become near-instant or significantly faster. This would
> also reduce the number of writes, leading to less wear and tear on
> storage devices.

As Amir has stated, pretty much all journalled file systems will
combine a large number of file system operations into a single
transaction, unless there is an explicit request via an fsync(2)
system call. For example, ext4 in general only closes a journal
transaction every five seconds, or when there isn't enough space in
the journal (although in practice this isn't an issue if you are
using a reasonably modern mkfs.ext4, since we've increased the
default size of the journal).

The reason why deleting a large number of files, or changing the
permissions, ownership, timestamps, etc., of a large number of files
is slow is that you need to read the directory blocks to find the
inodes that you need to modify, read and update a large number of
inodes, and, if you are deleting the inodes, also update the block
allocation metadata (bitmaps, or btrees) so that those blocks are
marked as no longer in use. Some of the directory entries might be
cached in the dentry cache, and some of the inodes might be cached in
the inode cache, but that's not always the case.

If all of the metadata blocks that you need to read in order to
accomplish the operation are already cached in memory, then what you
propose is something that pretty much all journaled file systems will
do already, today. That is, the modifications that need to be made
to the metadata will be written to the journal first, and only after
the journal transaction has been committed will the actual metadata
blocks be written to the storage device, and this will be done
asynchronously.

In practice, the actual delay in doing one of these large operations
is the need to read the metadata blocks into memory, and this must be
done synchronously. For example, if you are deleting 100,000 files,
you first need to find the inodes for those 100,000 files by reading
the directory blocks; you then need to know which blocks will be
freed by deleting each of those 100,000 files, which means you will
need to read 100,000 inodes and their extent tree blocks; and then
you need to update the block allocation information, which requires
reading the block allocation bitmaps so they can be updated.

> Does this idea make sense? If it already exists, or if there’s a
> reason it wouldn’t work, please let me know.

So yes, it basically exists, although in practice it doesn't work as
well as you might think, because of the need to read a potentially
large number of metadata blocks.
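If you want to see that read cost directly, a quick and rough
experiment is to evict the caches first and then time the bulk
operation; this is only a sketch, and /path/to/large/tree is of
course a stand-in for a real directory tree:

    # Flush dirty data, then drop the page cache plus the dentry
    # and inode caches so the metadata has to be re-read from the
    # storage device (needs root; this evicts clean caches for the
    # whole system).
    sync
    echo 3 > /proc/sys/vm/drop_caches

    # With cold caches, the recursive chown has to read directory
    # blocks and inodes synchronously, so it will be noticeably
    # slower.
    time chown -R root:root /path/to/large/tree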
But for example, if you make sure that all of the inode information
is already cached, e.g.:

    ls -lR /path/to/large/tree > /dev/null

then the operation to do a bulk update will be fast:

    time chown -R root:root /path/to/large/tree

This demonstrates that the bottleneck tends to be *reading* the
metadata blocks, not *writing* the metadata blocks.

Cheers,

					- Ted