Performance drop and retransmits with CephFS

Hi all,

I have a question regarding CephFS and write performance; possibly I am
overlooking a setting.

We recently started using Ceph and want to use CephFS as the shared
storage for a Sync-and-Share solution.
We are still in a testing phase, mainly looking at the performance of the
system, and we are seeing some strange issues.
We are running the Ceph Quincy release (17.2.6) with a replica-3 data
policy across 21 hosts spread over 3 locations.

When I write multiple 1G files, write performance drops from
400 MiB/s to 18 MiB/s, along with multiple TCP retransmits.
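(For illustration, something along these lines is what I mean by writing
multiple 1G files; the exact command and mount path are just an example:

    for i in $(seq 1 10); do
        dd if=/dev/zero of=/mnt/cephfs/test$i bs=1M count=1024 conv=fsync
    done
)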
However, when I empty the page caches on the client every minute, the
performance remains good; of course that is not a real solution.
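(By emptying the page caches I mean something like this, run from cron on
the client:

    sync && echo 1 > /proc/sys/vm/drop_caches
)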
I have already experimented a lot with the sysctl settings, such as the
vm.dirty_* knobs, but it makes no difference at all.
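(For example, I have varied the dirty-writeback knobs along these lines;
the values here are just examples:

    sysctl -w vm.dirty_background_bytes=67108864   # start background writeback at 64 MiB
    sysctl -w vm.dirty_bytes=536870912             # hard dirty limit, 512 MiB
    sysctl -w vm.dirty_expire_centisecs=500        # expire dirty pages after 5 s
    sysctl -w vm.dirty_writeback_centisecs=100     # wake the flusher every 1 s
)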

When I enable fuse_disable_pagecache, write performance does stay
reasonable at 70 MiB/s, but read performance completely collapses, from
600 MiB/s to 40 MiB/s.
There is no difference in behavior between the kernel and fuse clients.
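(For reference, I enabled that option on the client via ceph.conf and
remounted:

    [client]
        fuse_disable_pagecache = true
)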

I have also played around with client_oc_max_dirty, client_oc_max_objects,
client_oc_size, etc., but have not found the right setting yet.
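(As an example of what I tried; the values are just illustrative, and as
far as I understand these options only affect the fuse client / libcephfs,
not the kernel mount:

    ceph config set client client_oc_size 536870912        # object cache size, 512 MiB
    ceph config set client client_oc_max_dirty 268435456   # max dirty data in cache, 256 MiB
    ceph config set client client_oc_max_objects 4096      # max objects held in cache
)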
Anyone familiar with this who can give me some hints?

Thanks for your help! :-)

Kind regards, Tom
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


