Thank you for the detailed explanation! The overhead for metadata is what I expected. However, I wasn’t aware of the default protection period of one hour, nor of the concept of segments.

Now, a few days later, I get:

$ df -h /bigstore/
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/bigstore  3.5T  2.7T  699G  80% /bigstore

$ du -sh /bigstore/
2.5T    /bigstore/

The used space reported by `df -h` is now 2.7T, versus 3.0T a few days ago. Back then I was apparently too close to a major file operation: I had geotagged tens of thousands of raw image files, modifying them in place (EXIF headers).

Should the following command have freed up disk space?

# nilfs-clean -S 20/0.1 --protection-period=0 /bigstore

I realize it doesn’t reduce the number of checkpoints. I really am a n00b when it comes to log-structured file systems; I just want to use NILFS2 for the ability to revert accidental file changes (the workflow I have in mind is sketched at the end of this mail).

One more question, since you wrote:

> Incidentally, the reason why the df output (used capacity) of NILFS is
> calculated from the used segments and not the number of used blocks is
> because the blocks in use on NILFS change dynamically depending on the
> conditions, making it difficult to respond immediately. If the
> dissociation is large, I think some kind of algorithm should be
> introduced to improve it.
>
> The actual blocks in use should be able to be calculated as follows
> using the output of "lssu -l" (when the block size is 4KiB). For your
> reference.
>
> $ sudo lssu -l -p 0 | awk 'NR>1{sum+=$6}END{print sum*4096}' | numfmt --to=iec-i

Certainly interesting! But, I assume, without garbage collection I cannot use the space in the sparse segments anyhow, so `df` should give me the space that is currently available for actual use. Do I understand that correctly?
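For what it’s worth, this is roughly how I have been trying to check whether the cleaner actually frees anything, by putting the df figure next to your lssu pipeline (I’m assuming a 4 KiB block size as in your example, and the GNU coreutils df options -B1 and --output=used, so the two numbers are directly comparable; please correct me if I’ve misread the pipeline):

$ df -B1 --output=used /bigstore/ | tail -n 1 | numfmt --to=iec-i
$ sudo lssu -l -p 0 | awk 'NR>1{sum+=$6}END{print sum*4096}' | numfmt --to=iec-i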
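And for context, the "revert accidental changes" workflow I had in mind looks roughly like the following. The checkpoint number 1234, the mount point /mnt/nilfs-snap and the file path are only placeholders, so please tell me if this is not the intended way to use it:

$ sudo lscp                                      # list existing checkpoints/snapshots
# chcp ss /dev/mapper/bigstore 1234              # turn checkpoint 1234 into a snapshot so the cleaner won’t reclaim it
# mkdir -p /mnt/nilfs-snap
# mount -t nilfs2 -o ro,cp=1234 /dev/mapper/bigstore /mnt/nilfs-snap
$ cp /mnt/nilfs-snap/photos/IMG_0001.NEF /bigstore/photos/IMG_0001.NEF   # restore the pre-geotagging version
# umount /mnt/nilfs-snap
# chcp cp /dev/mapper/bigstore 1234              # demote the snapshot again when done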