cephfs-data-scan can help you.

2017-03-24 13:35 GMT+08:00 Wang, Zhiye <Zhiye.Wang@xxxxxxxxxxxx>:
> Dear all,
>
> I store only one replica (pool size = 1) for my data, and I have lost
> one OSD in my cluster. Is there a way to figure out which files (in
> CephFS) are actually affected?
>
> "pool size = 1" is just a simple example; the same question applies to
> "pool size = 2, but lose 2 OSDs" or "pool size = 3, but lose 3 OSDs".
>
> Thanks
> Zhiye
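
To expand on that: the pg_files subcommand of cephfs-data-scan is meant
for this kind of situation; it reports which files have objects in a
given set of PGs. Below is a rough sketch of the workflow, not a recipe:
the OSD id "osd.3", the PG ids and the scan root "/" are placeholders
for your own cluster, so check "cephfs-data-scan --help" and the CephFS
disaster-recovery docs for the exact syntax in your release:

    # 1. Find the PGs that had data on the lost OSD.
    #    If the OSD has already been removed from the map, reconstruct
    #    the list from an older "ceph pg dump" instead.
    ceph pg ls-by-osd osd.3

    # 2. Ask cephfs-data-scan which files have objects in those PGs
    #    (the PG ids here are made up for the example).
    cephfs-data-scan pg_files / 1.4 1.5 1.7

For the "size = 2, lose 2 OSDs" case, only the PGs whose replicas were
all on the lost OSDs actually lose data, so intersect the ls-by-osd
lists for the two OSDs before feeding the result to pg_files.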