Re: CephFS recovery from missing metadata objects questions

On Wed, Dec 7, 2016 at 3:28 PM, Wido den Hollander <wido@xxxxxxxx> wrote:
> (I think John knows the answer, but sending to ceph-users for archival purposes)
>
> Hi John,
>
> A Ceph cluster lost a PG containing CephFS metadata and is currently going through the CephFS disaster recovery procedure described here: http://docs.ceph.com/docs/master/cephfs/disaster-recovery/

I wonder if this has any relation to your thread about size=2 pools ;-)

> The CephFS data pool has 1.4B objects, and I currently have 16 concurrent scan_extents workers running:
>
> # cephfs-data-scan --debug-rados=10 scan_extents --worker_n 0 --worker_m 16 cephfs_metadata
> # cephfs-data-scan --debug-rados=10 scan_extents --worker_n 1 --worker_m 16 cephfs_metadata
> ..
> ..
> # cephfs-data-scan --debug-rados=10 scan_extents --worker_n 15 --worker_m 16 cephfs_metadata
>
> According to the source in DataScan.cc:
> * worker_n: Worker number
> * worker_m: Worker count
>
> So with the commands above I have 16 workers running, correct? For scan_inodes I want to scale out to 32 workers to speed up the process even more.
>
> Just to double-check before I send a new PR to update the docs, this is the right way to run the tool, correct?

It looks like you're targeting cephfs_metadata instead of your data pool.

scan_extents and scan_inodes operate on data pools, even if your goal
is to rebuild your metadata pool (the argument is what you are
scanning, not what you are writing to).
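
For example, assuming your data pool is named cephfs_data (the actual
pool name isn't mentioned in this thread, so substitute your own), the
corrected invocations would look something like:

# cephfs-data-scan --debug-rados=10 scan_extents --worker_n 0 --worker_m 16 cephfs_data
# cephfs-data-scan --debug-rados=10 scan_extents --worker_n 1 --worker_m 16 cephfs_data
..
# cephfs-data-scan --debug-rados=10 scan_extents --worker_n 15 --worker_m 16 cephfs_data

and likewise for scan_inodes once all scan_extents workers have
finished, e.g. with 32 workers:

# cephfs-data-scan --debug-rados=10 scan_inodes --worker_n 0 --worker_m 32 cephfs_data
..
# cephfs-data-scan --debug-rados=10 scan_inodes --worker_n 31 --worker_m 32 cephfs_data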

There is also a "scan_frags" command that operates on a metadata pool.

John

> If not, before sending the PR and starting scan_inodes on this cluster, what is the correct way to invoke the tool?
>
> Thanks!
>
> Wido
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


