Hello, list.
Has anybody been in the situation where, after a "ceph fs reset", the
filesystem comes up blank (it mounts OK, but ls shows no files or
directories), while the data and metadata pools still hold something
(698G and 400M respectively, according to "ceph fs status")?
I would be grateful for pointers to documentation and/or suggestions.
Maybe I remember wrong, but a few times in the past the same "ceph fs
reset" produced minor corruption of recent filesystem changes.
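For clarity, by "ceph fs reset" I mean the standard command (the
filesystem name here is a placeholder):

    ceph fs reset cephfs --yes-i-really-mean-it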