On Tue, Jul 19, 2016 at 9:39 AM, Patrick Donnelly <pdonnell@xxxxxxxxxx> wrote:
> On Tue, Jul 19, 2016 at 10:25 AM, Fabiano de O. Lucchese
> <flucchese@xxxxxxxxx> wrote:
>> I configured the cluster to replicate data twice (3 copies), so these
>> numbers fall within my expectations. So far so good, but here comes the
>> issue: I configured CephFS and mounted a share locally on one of my
>> servers. When I write data to it, it shows abnormally high performance at
>> the beginning for about 5 seconds, stalls for about 20 seconds and then
>> picks up again. For long-running tests, the observed write throughput is
>> very close to what rados bench provided (about 640 MB/s), but for
>> short-lived tests, I get peak performance of over 5 GB/s. I know that
>> journaling is expected to cause spiky performance patterns like that, but
>> not to this level, which makes me think that CephFS is buffering my writes
>> and returning control to the client before persisting them to the
>> journal, which looks undesirable.
>
> The client is buffering the writes to RADOS, which would give you the
> abnormally high initial performance until the cache needs to be flushed.
> You might try tweaking certain OSD settings:
>
> http://docs.ceph.com/docs/hammer/rados/configuration/osd-config-ref/
>
> in particular: "osd client message size cap". Also:

I am reasonably sure you don't want to change the message size cap; that's
entirely an OSD-side throttle on how much dirty data it accepts before it
stops reading off the wire, and I don't think the client gets any feedback
from its outgoing data. More likely the issue is how much dirty data the
client absorbs before it forces writes out to the OSDs, so you want to look
at:

client_oc_size (default 1024*1024*200, aka 200 MB)
client_oc_max_dirty (default 100 MB)
client_oc_target_dirty (default 8 MB)

and turn down the max dirty limits if you're finding it's too bumpy a ride.
-Greg

> http://docs.ceph.com/docs/hammer/rados/configuration/journal-ref/
>
> --
> Patrick Donnelly
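
P.S. As a rough illustration, a minimal ceph.conf sketch of those
client-side cache knobs might look like the following. This assumes the
userspace client (ceph-fuse or libcephfs; the kernel CephFS client does not
read these options), and the lowered max dirty value is only an example,
not a tuned recommendation:

[client]
    client oc size = 209715200        # object cache size, 200 MB (the default)
    client oc max dirty = 52428800    # e.g. halve the 100 MB default dirty limit
    client oc target dirty = 8388608  # flushing target of 8 MB dirty (the default)

The options are read at client startup, so remount the CephFS share with
ceph-fuse after changing them for the new limits to take effect.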