Re: Yet another performance tuning for CephFS

Hi Patrick.

Thank you for the prompt response.

I included the ceph.conf file, but I think you may have missed it.

These are the configs I tuned (I also disabled debug logging in the [global] section; a rough example of that is below). Correct me if I misunderstood you on this.
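To be precise about the debug part: I do not have the exact list in front of me, but the [global] entries I zeroed look roughly like this (the subsystem list below may be incomplete):

[global]
debug ms = 0/0
debug osd = 0/0
debug filestore = 0/0
debug journal = 0/0
debug mon = 0/0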

By the way, before I get to the config, I want to answer your point about sync I/O. Yes, if I remove oflag=direct then dd goes up to 1.1 GB/s. Very fast indeed.

But let's try another test. Say I have a 5 GB file on my server. If I do this:

$ rsync ./bigfile /mnt/cephfs/targetfile --progress

Then I see at most 200 MB/s. I think that is still slow :/ Is this expected?

Am I doing something wrong here?
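In case it helps to narrow this down, this is roughly how I plan to compare a single buffered stream against a few parallel ones (the target file names below are just placeholders):

$ dd if=/dev/zero of=/mnt/cephfs/seqtest bs=4M count=1024 conv=fdatasync

$ for i in 1 2 3 4; do rsync --whole-file ./bigfile /mnt/cephfs/target.$i & done; wait

If the parallel copies together get well past 200 MB/s, I would guess the single rsync stream, not the OSDs, is the limit.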

Anyway, here are the OSD configs I tried to tune; a note on how I verify them on a running OSD follows the list.

[osd]
osd max write size = 512
osd client message size cap = 2147483648
osd mount options xfs = rw,noexec,nodev,noatime,nodiratime,nobarrier
filestore xattr use omap = true
osd_op_threads = 8
osd disk threads = 4
osd map cache size = 1024
filestore_queue_max_ops = 25000
filestore_queue_max_bytes = 10485760
filestore_queue_committing_max_ops = 5000
filestore_queue_committing_max_bytes = 10485760000
journal_max_write_entries = 1000
journal_queue_max_ops = 3000
journal_max_write_bytes = 1048576000
journal_queue_max_bytes = 1048576000
filestore_max_sync_interval = 15
filestore_merge_threshold = 20
filestore_split_multiple = 2
osd_enable_op_tracker = false
filestore_wbthrottle_enable = false
osd_client_message_size_cap = 0
osd_client_message_cap = 0
filestore_fd_cache_size = 64
filestore_fd_cache_shards = 32
filestore_op_threads = 12
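To check whether a given value actually took effect, I run something like this on the OSD host (osd.0 is just an example id):

$ ceph daemon osd.0 config show | grep filestore_queue_max_ops

$ ceph daemon osd.0 config get osd_op_threads

That at least rules out the case where the daemons never picked the settings up.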






On 2017-07-17 22:41, Patrick Donnelly wrote:
Hi Gencer,

On Mon, Jul 17, 2017 at 12:31 PM,  <gencer@xxxxxxxxxxxxx> wrote:
I located and applied almost every tuning setting/config I could find on the internet. I couldn't manage to speed things up by even one byte. It is always the same speed whatever I do.

I believe you're frustrated but this type of information isn't really
helpful. Instead tell us which config settings you've tried tuning.

I have 2 nodes with 10 OSDs each, and each OSD is a 3 TB SATA drive. Each node has 24 cores and 64 GB of RAM. Ceph nodes are connected via 10GbE NICs. No FUSE used, but I tried that too. Same results.



$ dd if=/dev/zero of=/mnt/c/testfile bs=100M count=10 oflag=direct

This looks like your problem: don't use oflag=direct. That will cause
CephFS to do synchronous I/O at great cost to performance in order to
avoid buffering by the client.



