Re: Time Estimation for cephfs-data-scan scan_links

[...]
> What is being done is a serial tree walk and copy in 3
> replicas of all objects in the CephFS metadata pool, so it
> depends on both the read and write IOPS rates for the metadata
> pool, but mostly on the write IOPS. [...] Wild guess:
> metadata is on 10x 3.84TB SSDs without persistent cache, data
> is on 48x 8TB devices probably HDDs. Very cost effective :-).

I do not know whether those guesses are right, but in general
most Ceph instances I have seen were designed with the "cost
effective" choice of providing just enough IOPS to run the user
workload (and often not even that), but not the extra headroom
to run the admin workload quickly as well (checking, scanning,
scrubbing, migrating, 'fsck' or 'resilvering' of the underlying
filesystem). A similar situation often exists for non-HPC
filesystem types, but the scale of and pressure on those
instances are usually much lower than for HPC filesystem
instances, so the consequences are less obvious.
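
For what it is worth, a very rough back-of-envelope estimate
along the lines of the quoted description (a serial walk, one
read plus one replicated write per object) could be sketched
like this; the object count and latencies below are made-up
placeholders, to be replaced with the OBJECTS figure from
'rados df' and the actual per-op latencies of the cluster:

#!/usr/bin/env python3
# Crude serial-walk model: each metadata object costs roughly
# one read plus one replicated write, performed one after the
# other. All numbers are placeholder assumptions.
objects_in_metadata_pool = 50_000_000  # placeholder; see "rados df"
read_latency_s = 0.0005    # assumed per-object read latency (SSD)
write_latency_s = 0.002    # assumed per-object replicated write latency

per_object_s = read_latency_s + write_latency_s
total_s = objects_in_metadata_pool * per_object_s
print("estimated scan_links time: %.1f hours" % (total_s / 3600))

Obviously the real duration also depends on how much of the
IOPS budget the user workload is consuming at the same time,
which is exactly the point above.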