Re: Write i/o in CephFS metadata pool

On 29/01/2020 10:24, Samy Ascha wrote:
> I've been running CephFS for a while now, and ever since setting it up I've seen unexpectedly large write I/O on the CephFS metadata pool.
>
> The filesystem is otherwise stable and I'm seeing no usage issues.
>
> I'm in a read-intensive environment from the clients' perspective, and throughput for the metadata pool is consistently larger than that of the data pool.
>
> [...]
>
> This might be a somewhat broad question and shallow description, so yeah, let me know if there's anything you would like more details on.

No explanation from me, but chiming in, as I've seen something similar
happen on my single-node "cluster" at home, where I expose a CephFS
through Samba using vfs_ceph, mostly for Time Machine backups. It's
running Ceph 14.2.6 on Debian Buster.
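
For reference, a minimal vfs_ceph share in smb.conf looks roughly like
this; the share name, path and cephx user are placeholders, and the
vfs_fruit bits for Time Machine are left out:

  [cephfs-share]
      # export a CephFS directory via the vfs_ceph module
      path = /timemachine
      vfs objects = ceph
      ceph:config_file = /etc/ceph/ceph.conf
      ceph:user_id = samba
      # kernel share modes don't work on a non-local filesystem
      kernel share modes = no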

I can easily perform debugging operations there, as there's no SLA in place :)
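
If it helps, these are the commands I'd start with to compare write
traffic on the two pools and see what the MDS itself is issuing (the
MDS daemon name is a placeholder):

  # per-pool client I/O rates as reported by the monitors
  watch -n 5 ceph osd pool stats

  # cumulative per-pool read/write ops and bytes
  rados df

  # MDS-side counters: journal activity and the OSD ops it sends
  ceph daemon mds.<name> perf dump mds_log
  ceph daemon mds.<name> perf dump objecter

The objecter section shows the read/write ops the MDS sends to the
OSDs, and mds_log covers its journal, which should make it easier to
see where the metadata-pool writes originate.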

Jasper

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


