Re: CephFS: How to figure out which files are affected after a disaster

cephfs-data-scan can help you.
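For example, something along these lines should narrow it down. This is only a rough sketch: the pool name cephfs_data, the mount point /mnt/cephfs, osd.3 and the PG ids below are placeholders, so substitute your own values, and pg_files is only there in newer releases.

    # 1. Find the PGs that lived on the lost OSD.
    ceph pg ls-by-osd osd.3

    # 2a. If your cephfs-data-scan has the pg_files subcommand, ask it which
    #     files under a directory had data objects in those PGs.
    cephfs-data-scan pg_files /home/bob 2.1a 2.3f

    # 2b. Otherwise, map data objects to PGs yourself. CephFS data objects
    #     are named <inode-hex>.<stripe-index-hex>, so the prefix of an
    #     affected object tells you the inode. Slow on a big pool.
    rados -p cephfs_data ls | while read obj; do
        ceph osd map cephfs_data "$obj"
    done

    # 3. Convert the hex inode prefix to decimal and look the file up on a
    #    mounted client.
    printf '%d\n' 0x10000000000     # -> 1099511627776
    find /mnt/cephfs -inum 1099511627776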

2017-03-24 13:35 GMT+08:00 Wang, Zhiye <Zhiye.Wang@xxxxxxxxxxxx>:
> Dear all,
>
> I store only one replica (pool size = 1) for my data, and I have lost one OSD in my cluster. Is there a way to figure out which files (in CephFS) are actually affected?
>
> "pool size = 1" is just an example for simple. I can also say "pool size = 2, but lose 2 OSD", or "pool size = 3, but lose 3 OSD".
>
>
>
> Thanks
> Zhiye
>
>


