Hi Sage,

> Okay, sorry I took a while to get back to you.

Sorry too - most of the time I was focused on this problem.

> It looks like I gave you bad advice here! The 'nosnap' files mean
> filestore was operating in non-snapshotting mode, and the
> --osd-use-stale-snap warning that it would lose data was real... it
> rolled back to an empty state and threw out the data on the device.
> :( :( I'm *very* sorry about this! I haven't looked at or worked with
> the btrfs mode in ages (we don't recommend it and almost nobody uses
> it) but I should have been paying close attention.

Thank you for your time and effort, it was important to have such help.
There were many errors in the setup of this cluster. We didn't realize
there could be so many strange things that were f...ed up...

> What is the state of the cluster now?

The cluster is dead. After a few more days of fighting with it we
decided to shut it down. We fixed the scripts for recovering volumes
from a powered-off Ceph cluster (this one:
https://github.com/cmgitdream/ceph-rbd-recover-tool) and got them
running on the Jewel version (10.2.7). I set up a brand new cluster on
other hardware and the images are now being imported into the new
cluster. With some direct editing of MySQL in the OpenStack services we
didn't have to change anything for our clients from the Horizon point
of view. Once the dust settles, we will push the changes to this tool
to GitHub.

After the migration is finished we will try to bring up this dead
cluster and take some more aggressive action to make it work anyway.

--
Regards,
Lukasz
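
PS. In case it is useful to anyone hitting the same problem: once the
recovered image files are sitting on local disk, getting them into the
new cluster is just the plain rbd CLI. This is only a rough sketch - the
/recovery/images path, the "volumes" pool and the "cinder" client name
below are made-up examples, not our real setup, and the recovery step
itself is done with the tool's own commands as described in its README:

  # import every recovered image file into the new cluster,
  # keeping the file name as the image name
  for f in /recovery/images/*; do
      img=$(basename "$f")
      rbd import "$f" volumes/"$img" --id cinder
  done

  # sanity check the last imported image
  rbd info volumes/"$img" --id cinder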