Re: Yet another performance tuning for CephFS

>> Not for 10GbE, but for public vs cluster network, for example:

Applied. Thanks!

>> Then I'm not sure what to expect... probably poor performance with sync writes on filestore, and not sure what would happen with
>> bluestore...
>> probably much better than filestore though if you use a large block size.

At the moment it looks good, but can you explain a bit more about block size? (Or a reference page would also work.)

Gencer.

-----Original Message-----
From: Peter Maloney [mailto:peter.maloney@xxxxxxxxxxxxxxxxxxxx] 
Sent: Tuesday, July 18, 2017 5:59 PM
To: Gencer W. Genç <gencer@xxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Yet another performance tuning for CephFS

On 07/18/17 14:10, Gencer W. Genç wrote:
>>> Are you sure? Your config didn't show this.
> Yes. I have a dedicated 10GbE network between the Ceph nodes; each node has a separate 10GbE network card on that link. Do I have to set anything in the config for 10GbE?
Not for 10GbE, but for public vs cluster network, for example:

> public network = 10.10.10.0/24
> cluster network = 10.10.11.0/24

Mainly this is for replication performance.

And using jumbo frames (a high MTU, like 9000 on the hosts and higher on the
switches) also increases performance a bit (especially on slow CPUs, in theory). That's also not set in ceph.conf.
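
As a rough sketch of what that looks like on a Linux host (the interface name eth1 and the peer address 10.10.11.2 are just placeholders here):

  # raise the MTU at runtime; also make it persistent in your distro's
  # network config, and make sure the switch ports accept >= 9000 bytes
  ip link set dev eth1 mtu 9000

  # check that jumbo frames pass end to end without fragmentation
  # (8972 = 9000 minus 20 bytes IP header and 8 bytes ICMP header)
  ping -M do -s 8972 10.10.11.2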

>>> What kind of devices are they? Did you do the journal test?
> They are neither NVMe nor SSDs. Each node has 10x 3TB SATA hard disk drives (HDDs).
Then I'm not sure what to expect... probably poor performance with sync writes on filestore, and not sure what would happen with bluestore...
probably much better than filestore though if you use a large block size.
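
To see the block size effect yourself, a simple comparison is direct sequential writes with small vs. large blocks on the CephFS mount (the /mnt/cephfs path is just an example):

  # 1 GiB written in 4 KB direct writes - many small ops, usually slow on HDD-backed OSDs
  dd if=/dev/zero of=/mnt/cephfs/ddtest bs=4k count=262144 oflag=direct

  # the same 1 GiB written in 4 MB direct writes - usually far higher throughput
  dd if=/dev/zero of=/mnt/cephfs/ddtest bs=4M count=256 oflag=direct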
>
>
> -Gencer.
>
>
> -----Original Message-----
> From: Peter Maloney [mailto:peter.maloney@xxxxxxxxxxxxxxxxxxxx]
> Sent: Tuesday, July 18, 2017 2:47 PM
> To: gencer@xxxxxxxxxxxxx
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Yet another performance tuning for CephFS
>
> On 07/17/17 22:49, gencer@xxxxxxxxxxxxx wrote:
>> I have a separate 10GbE network for Ceph and another for the public network.
>>
> Are you sure? Your config didn't show this.
>
>> No, they are not NVMe, unfortunately.
>>
> What kind of devices are they? Did you do the journal test?
> http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
>
> Unlike most tests, with Ceph journals you can't just look at the load on the device and decide it's not the bottleneck; you have to test it another way. I had some Micron SSDs that performed poorly, and that test showed them performing poorly too, whereas other benchmarks and the disk load during journal tests made them look fine, which was misleading.
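
That test boils down to single-threaded, synchronous 4k direct writes; something along these lines with fio (the device name is a placeholder, and note that writing to the raw device destroys whatever is on it):

  fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based \
      --group_reporting --name=journal-test

Drives that do well here sustain thousands of IOPS or more; drives that do badly often fall to a few hundred.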
>> Do you know any test command that I can try to see if this is the
>> maximum read speed from rsync?
> I don't know how you can improve your rsync test.
