> > My Ceph cluster has a CephFS file system using an erasure-coded
> > data pool (k=8, m=2), which has used 14 TiB of space. The CephFS
> > has 19 subvolumes, and each subvolume automatically takes a
> > snapshot every day and keeps it for 3 days. The problem is that
> > when I manually add up the disk space usage of each subvolume
> > directory in CephFS, the total is only 8.4 TiB. I don't understand
> > why this is happening. Do snapshots take up a lot of space?

Snapshot consumption of underlying storage is a function of how much
data is written and removed: a snapshot pins the old copies of objects
that are later overwritten or deleted, so churn within your 3-day
retention window shows up as extra pool usage.

Do you have a lot of fairly small files on your CephFS? On an
erasure-coded pool every file is split into k+m shards, and each shard
is padded up to the OSD's allocation unit (bluestore_min_alloc_size),
so small files can consume considerably more raw space than their
logical size suggests.
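
For a rough sanity check, here is a back-of-the-envelope sketch in
Python. It assumes (my assumption, not confirmed by your post) that
the 14 TiB figure is RAW usage as reported by "ceph df", and that the
8.4 TiB directory total is logical data before EC expansion:

    # Sketch: how much of the 14 TiB raw usage is explained by
    # erasure-code overhead alone? Assumes 14 TiB is raw pool usage
    # and 8.4 TiB is logical data (both assumptions, see above).

    K, M = 8, 2                # erasure-code profile: k=8, m=2
    logical_tib = 8.4          # sum of subvolume directory sizes
    raw_used_tib = 14.0        # usage reported by the cluster

    # Each logical byte occupies (k+m)/k bytes of raw capacity.
    ec_overhead = (K + M) / K  # 1.25x for k=8, m=2

    expected_raw_tib = logical_tib * ec_overhead
    unexplained_tib = raw_used_tib - expected_raw_tib

    print(f"EC overhead factor:        {ec_overhead:.2f}x")
    print(f"Expected raw from 8.4 TiB: {expected_raw_tib:.1f} TiB")
    print(f"Unexplained raw usage:     {unexplained_tib:.1f} TiB")

By that arithmetic, EC overhead accounts for roughly 10.5 TiB, leaving
about 3.5 TiB to be explained by snapshot churn and per-shard
allocation padding on small files.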