Re: I get weird ls pool detail output 12.2.11

On 07/02/2019 20:21, Marc Roos wrote:
> I also do not exactly know how many I have. It is sort of a test setup
> and the bash script creates a snapshot every day. So with 100 dirs it
> will be a maximum of 700. But the script first checks if there is any
> data with getfattr --only-values --absolute-names -d -m ceph.dir.rbytes
>
> I don't know what 'leaking old snapshots forever' means; how do I check
> whether this is happening? I am quite confident that the bash script
> only creates and removes the snap dirs as it should.
>
> Is it not strange that the snaps are shown on fs data pools I am not
> using? fs_data does indeed have snapshots, fs_data.ec21.ssd is empty.
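For reference, the pattern you describe comes down to roughly the
following. This is only a minimal sketch: the /mnt/cephfs mount point,
the date-named snapshot directories and the 7-day retention are my
assumptions, not your actual script. The last loop is one way to count
how many snapshots actually exist right now, which should also answer
the "how do I check" question.

#!/bin/bash
# Rolling-snapshot sketch: one snapshot per directory per day, keep 7 days.
KEEP_DAYS=7
TODAY=$(date +%Y-%m-%d)
NOW=$(date +%s)

for dir in /mnt/cephfs/*/; do
    # Skip directories that contain no data (the same getfattr check as above)
    bytes=$(getfattr --only-values --absolute-names -d -m ceph.dir.rbytes "$dir")
    [ "${bytes:-0}" -eq 0 ] && continue

    # Taking a CephFS snapshot is just creating a directory under .snap
    mkdir -p "$dir.snap/$TODAY"

    # Remove snapshots older than the retention window
    for snap in "$dir.snap"/*; do
        [ -d "$snap" ] || continue
        snap_ts=$(date -d "$(basename "$snap")" +%s 2>/dev/null) || continue
        [ $(( (NOW - snap_ts) / 86400 )) -gt "$KEEP_DAYS" ] && rmdir "$snap"
    done
done

# Count the snapshots that currently exist (.snap is invisible to a plain
# ls/find of the parent directory, so it has to be listed explicitly):
for dir in /mnt/cephfs/*/; do ls "$dir.snap" 2>/dev/null; done | wc -l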

I think the snapshot IDs will apply to all pools in the FS regardless of whether they contain any data referenced by the snapshots.

I just tested this and it seems each CephFS snapshot consumes two snapshots in the underlying pools, one apparently created on deletion (I wasn't aware of this). So for ~700 snapshots the output you're seeing is normal. It seems that using a "rolling snapshot" pattern in CephFS inherently creates a "one present, one deleted" pattern in the underlying pools.
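If you want to see it for yourself, you can compare the pool metadata
before and after a create/delete cycle, along these lines (the mount
point and directory name below are placeholders):

# Note the removed_snaps intervals currently shown for the CephFS data pools
ceph osd pool ls detail

# Take a CephFS snapshot and delete it again
# (/mnt/cephfs/somedir is a placeholder for any directory on the filesystem)
mkdir /mnt/cephfs/somedir/.snap/test-snap
rmdir /mnt/cephfs/somedir/.snap/test-snap

# Compare: the intervals should grow on every data pool of the filesystem,
# whether or not that pool holds any of the snapshotted data
ceph osd pool ls detail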

--
Hector Martin (hector@xxxxxxxxxxxxxx)
Public Key: https://mrcn.st/pub
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


