I think I realized what went wrong: I compressed my filesystem _after_
already having taken some snapshots. I think that then duplicated all my
files and basically filled my filesystem...
Sorry for that! I'm happy to be wrong, at least. And thank you for this
great answer!
On 19/04/2021 at 23:29, Dominique Martinet wrote:
Lyes Saadi wrote on Mon, Apr 19, 2021 at 10:56:51PM +0100:
It's a bit late to ask this question, but it came up when I noticed
that, after upgrading my PC to Silverblue 34, manually compressing my
files, and taking some snapshots, rpm-ostree began complaining about a
lack of free space... while compsize reported that I was only using
84GB (or GiB?) of my 249GB filesystem... I then figured that, because
of the compression and the snapshots, ostree thought my disk was full.
The same problem happened with gnome-disk. I reported both issues[1][2].
Err, no.
btrfs has been reporting proper numbers in statfs for a long time, and
programs can rely on them; compsize is only there if you're curious,
and for debugging. In this case your filesystem really is almost full
(around 8GB free according to your output).
That was a problem very early on, and basically everyone complained
that an unusable df would break too many programs.
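As a sanity check, you can also dump the raw statfs fields df is built
on (just a sketch; any path on the filesystem works):
# stat -f /
The Blocks: Total/Free/Available counters there are essentially what df
multiplies by the block size.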
You probably compressed your /, but have snapshots lying around that
still take up space and weren't considered in your compsize command?
If you don't trust df (statfs), there are two btrfs commands to look at
for more details; here's what they give on my system:
# btrfs fi df /
Data, single: total=278.36GiB, used=274.63GiB
System, DUP: total=32.00MiB, used=48.00KiB
Metadata, DUP: total=9.29GiB, used=6.88GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
# btrfs fi usage /
Overall:
    Device size:                 330.00GiB
    Device allocated:            297.00GiB
    Device unallocated:           33.00GiB
    Device missing:                  0.00B
    Used:                        288.39GiB
    Free (estimated):             36.73GiB  (min: 20.23GiB)
    Free (statfs, df):            36.73GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB  (used: 0.00B)
    Multiple profiles:                  no

Data,single: Size:278.36GiB, Used:274.63GiB (98.66%)
   /dev/mapper/slash   278.36GiB

Metadata,DUP: Size:9.29GiB, Used:6.88GiB (74.09%)
   /dev/mapper/slash    18.57GiB

System,DUP: Size:32.00MiB, Used:48.00KiB (0.15%)
   /dev/mapper/slash    64.00MiB

Unallocated:
   /dev/mapper/slash    33.00GiB
And for comparison:
# df -h /
Filesystem         Size  Used Avail Use% Mounted on
/dev/mapper/slash  330G  289G   37G  89% /
In all cases, the Used column actually corresponds to the compressed
size -- real blocks on disk, not the uncompressed data size.
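If you want to see that on a single directory, comparing du and
compsize makes the difference obvious (/usr here is just an arbitrary
example path):
# du -sh /usr
# compsize /usr
du does not account for btrfs compression, while compsize's Disk Usage
column shows the compressed blocks actually allocated.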
I have way too many subvolumes, but let's try to get an output that
accounts for that 289G "used"; I'm lazy, so first without snapshots:
# compsize -x / /home /var /var/lib/machines/ /nix
Processed 2722869 files, 1820146 regular extents (2063805 refs), 1625123 inline.
Type       Perc     Disk Usage   Uncompressed   Referenced
TOTAL       76%         232G          302G          317G
none       100%         196G          196G          194G
zstd        33%          34G          104G          122G
prealloc   100%         1.0G          1.0G          553M
Hm, not very convincing. Adding a few snapshots (there are more; I
guess adding all of them would bring the Disk Usage column up to 289G,
but that just takes too long for this mail -- the "proper" way to track
snapshot usage would be quotas, but I don't have those enabled here):
# compsize -x / /home /var /var/lib/machines/ /nix /.snapshots/{19,20}*/snapshot
Processed 10803451 files, 2110568 regular extents (7656942 refs), 5960388 inline.
Type       Perc     Disk Usage   Uncompressed   Referenced
TOTAL       75%         249G          331G          732G
none       100%         206G          206G          281G
zstd        33%          41G          123G          451G
prealloc   100%         1.0G          1.0G          551M
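For completeness, a minimal sketch of that quota approach -- these are
standard btrfs commands, but be aware that enabling quotas triggers a
rescan and has some performance cost:
# btrfs quota enable /
# btrfs qgroup show /
Each subvolume then gets a qgroup, and its excl column shows roughly
the space that deleting just that snapshot would free.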
I would suggest finding what subvolumes you have (btrfs subvolume
list /) and cleaning up old ones. I'm not sure what is used by default
nowadays (snapper?); there might be higher-level commands for this.
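The low-level version would look like this (the snapshot path below is
purely hypothetical -- take the real paths from the list output):
# btrfs subvolume list /
# btrfs subvolume delete /path/to/old-snapshot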
They might not be visible from your mountpoint if your setup mounts a
subvolume by default; in that case you can mount your btrfs volume
somewhere else, with -o subvol=/ for example, to see everything and
play with compsize if you want.
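Something like this (a sketch -- /dev/mapper/slash is my device from
the outputs above, yours will differ, and /mnt is just an empty
mountpoint):
# mount -o subvol=/ /dev/mapper/slash /mnt
# btrfs subvolume list /mnt
# compsize -x /mnt
# umount /mnt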