Re: I get weird ls pool detail output 12.2.11

 >> Hmmm, I am having a daily cron job creating these on only maybe 100
 >> directories. I am removing the snapshot, if it exists, with rmdir.
 >> Should I do this differently? Maybe e.g. use snap-20190101,
 >> snap-20190102, snap-20190103; then I will always create unique
 >> directories and the ones removed will also always be unique.
 >
 >The names shouldn't matter. If you're creating 100 snapshots then
 >having a removed_snaps list with that order of entries may be normal;
 >I'm not sure how many you really have, since your line was truncated,
 >but at least 600 or so? You might want to go through your snapshots
 >and check that you aren't leaking old snapshots forever, or deleting
 >the wrong ones.

I also don't know exactly how many I have. It is a sort of test setup, and
the bash script creates a snapshot every day, so with 100 dirs there will
be a maximum of 700. But the script first checks whether the directory
holds any data with:

getfattr --only-values --absolute-names -d -m ceph.dir.rbytes
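For reference, a minimal sketch of such a rotation script, assuming a
7-day retention and snap-YYYYMMDD naming (the function and variable names
here are hypothetical, not from the actual cron job):

```shell
#!/bin/bash
# Hypothetical daily CephFS snapshot rotation, 7-day retention.
# snap_name is plain date arithmetic; rotate needs a mounted CephFS.

retention_days=7

snap_name() {                       # snap-YYYYMMDD for an epoch timestamp
    date -u -d "@$1" +snap-%Y%m%d
}

rotate() {                          # rotate <dir> <now-epoch-seconds>
    local dir=$1 now=$2 new old bytes
    new=$(snap_name "$now")
    old=$(snap_name $((now - retention_days * 86400)))
    # Skip empty dirs: ceph.dir.rbytes is the recursive byte count.
    bytes=$(getfattr --only-values --absolute-names -d \
                     -m ceph.dir.rbytes "$dir" 2>/dev/null)
    [ "${bytes:-0}" -gt 0 ] || return 0
    mkdir "$dir/.snap/$new" 2>/dev/null   # create today's snapshot
    rmdir "$dir/.snap/$old" 2>/dev/null   # drop the one past retention
}
```

With this scheme every created and removed name is unique, as suggested
above, and only one snapshot per directory ever falls out of the window.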


I don't know what 'leaking old snapshots forever' means; how do I check
whether this is happening? I am quite confident that the bash script only
creates and removes the snap dirs as it should.
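One way to check for leaks is to walk the .snap directory of every
monitored dir and flag names older than the retention window. A sketch,
assuming the snap-YYYYMMDD scheme and a /mnt/cephfs mount point (both
assumptions; adjust to your layout):

```shell
#!/bin/bash
# is_stale NAME CUTOFF: succeed if a snap-YYYYMMDD name is older than
# the cutoff date (YYYYMMDD). Names not matching the scheme fail.
is_stale() {
    case $1 in
        snap-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]) ;;
        *) return 1 ;;
    esac
    [ "${1#snap-}" -lt "$2" ]
}

# Flag every snapshot past retention under an assumed mount layout.
check_leaks() {
    local cutoff dir snap
    cutoff=$(date -u -d '7 days ago' +%Y%m%d)
    for dir in /mnt/cephfs/*/; do
        for snap in "$dir".snap/*; do
            [ -e "$snap" ] || continue
            is_stale "${snap##*/}" "$cutoff" && echo "stale: $snap"
        done
    done
}
```

If check_leaks prints nothing, no snapshot is older than the window and
nothing is being leaked.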

Isn't it strange that the snaps are shown on fs data pools I am not
using? fs_data does indeed have snapshots, but fs_data.ec21.ssd is empty.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


