write performance per disk

On 07/04/2014 11:40 AM, VELARTIS Philipp Dörhammer wrote:
> I use between 1 and 128 threads in different steps...
> But ~500MB/s of writes is the maximum I get while playing around.
>

I just mentioned it in a different thread: make sure you do parallel 
I/O! That's where Ceph really makes the difference. Run rados bench from 
multiple clients.
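
For example, a minimal sketch (pool name, duration and thread count are 
placeholders, adjust them to your setup; --run-name keeps the object 
names from colliding between clients):

    # run this simultaneously on several client machines:
    rados bench -p rbd 60 write -t 32 --no-cleanup --run-name $(hostname)

Then add up the bandwidth figures the individual clients report.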

> Uff, it's so hard to tune Ceph... so many people have problems... ;-)

No, Ceph is simply different from any other storage. Distributed storage 
behaves very differently, performance-wise, from traditional storage 
projects/products.

Wido

>
> -----Original Message-----
> From: Wido den Hollander [mailto:wido at 42on.com]
> Sent: Friday, 04 July 2014 10:55
> To: VELARTIS Philipp Dörhammer; ceph-users at lists.ceph.com
> Subject: Re: AW: [ceph-users] write performance per disk
>
> On 07/03/2014 04:32 PM, VELARTIS Philipp Dörhammer wrote:
>> HI,
>>
>> Ceph.conf:
>>         osd journal size = 15360
>>         rbd cache = true
>>         rbd cache size = 2147483648
>>         rbd cache max dirty = 1073741824
>>         rbd cache max dirty age = 100
>>         osd recovery max active = 1
>>         osd max backfills = 1
>>         osd mkfs options xfs = "-f -i size=2048"
>>         osd mount options xfs = "rw,noatime,nobarrier,logbsize=256k,logbufs=8,inode64,allocsize=4M"
>>         osd op threads = 8
>>
>> so it should be 8 threads?
>>
>
> How many threads are you using with rados bench? Don't touch the op
> threads setting from the start; the default is usually just fine.
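>
> If you already raised it, you can put it back at runtime, for example
> (a sketch; the default for osd op threads in this release is 2):
>
>     ceph tell osd.* injectargs '--osd-op-threads 2'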
>
>> All 3 machines have more or less the same disk load at the same time.
>> The disks as well:
>> Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
>> sdb              35.56        87.10      6849.09     617310   48540806
>> sdc              26.75        72.62      5148.58     514701   36488992
>> sdd              35.15        53.48      6802.57     378993   48211141
>> sde              31.04        79.04      6208.48     560141   44000710
>> sdf              32.79        38.35      6238.28     271805   44211891
>> sdg              31.67        77.84      5987.45     551680   42434167
>> sdh              32.95        51.29      6315.76     363533   44761001
>> sdi              31.67        56.93      5956.29     403478   42213336
>> sdj              35.83        77.82      6929.31     551501   49109354
>> sdk              36.86        73.84      7291.00     523345   51672704
>> sdl              36.02       112.90      7040.47     800177   49897132
>> sdm              33.25        38.02      6455.05     269446   45748178
>> sdn              33.52        39.10      6645.19     277101   47095696
>> sdo              33.26        46.22      6388.20     327541   45274394
>> sdp              33.38        74.12      6480.62     525325   45929369
>>
>>
>> The question is: is it poor performance to get at most 500MB/s of
>> writes with 45 disks and replica 2, or should I expect this?
>>
>
> You should be able to get more as long as the I/O is done in parallel.
>
> Wido
>
>>
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-bounces at lists.ceph.com] On Behalf
>> Of Wido den Hollander
>> Sent: Thursday, 03 July 2014 15:22
>> To: ceph-users at lists.ceph.com
>> Subject: Re: [ceph-users] write performance per disk
>>
>> On 07/03/2014 03:11 PM, VELARTIS Philipp Dörhammer wrote:
>>> Hi,
>>>
>>> I have a Ceph cluster set up (45 SATA disks, with the journals on the
>>> same disks) and get only 450MB/sec of sequential writes (the maximum
>>> while playing with the thread count in rados bench) with a replica
>>> count of 2.
>>>
>>
>> How many threads?
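>>
>> For example, a sweep like this (pool name is a placeholder) shows how
>> throughput scales with the thread count:
>>
>>     for t in 16 32 64 128; do
>>         echo "== $t threads =="
>>         rados bench -p rbd 30 write -t $t --no-cleanup
>>     done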
>>
>>> Which is about ~20MB/s of writes per disk (what I see in atop as well).
>>> Theoretically, with replica 2 and the journals on the same disks, it
>>> should be 45 x 100MB/s (SATA) / 2 (replica) / 2 (journal writes) =
>>> 1125MB/s: every client byte is written twice for replication, and each
>>> of those writes goes through the journal first. SATA disks in reality
>>> do 120MB/sec, so the theoretical ceiling should be even higher.
>>>
>>> I would expect 40-50MB/sec for each SATA disk.
>>>
>>> Can somebody confirm that they reach this speed with a setup that keeps
>>> the journals on the SATA disks (with journals on SSD, the speed should
>>> be ~100MB per disk)? Or does Ceph only give about ¼ of a disk's speed
>>> (and not the ½ expected because of the journals)?
>>>
>>
>> Did you verify how much each machine is doing? It could be that the data is not distributed evenly and that on a certain machine the drives are doing 50MB/sec.
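>>
>> For example (assuming the per-disk numbers above came from iostat
>> without an interval argument, which reports averages since boot), run
>> this on every node while the benchmark is going:
>>
>>     iostat -d -m 5    # live per-disk MB/s, refreshed every 5 seconds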
>>
>>> My setup is 3 servers, each with: 2 x 2.6GHz Xeons, 128GB RAM, 15 SATA
>>> disks for Ceph (and SSDs for the system), 1 x 10GbE for external
>>> traffic and 1 x 10GbE for OSD traffic. With reads I can saturate the
>>> network, but writes are far from that, and I would expect to at least
>>> saturate the 10GbE with sequential writes as well.
>>>
>>
>> Should be possible, but with only 3 servers the data distribution might
>> not be optimal, causing lower write performance.
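>>
>> A rough way to check the distribution (output details vary by version):
>>
>>     ceph osd tree         # how the OSDs map to hosts
>>     ceph pg dump osds     # per-OSD placement group and usage statistics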
>>
>> I've seen 10Gbit write performance on multiple clusters without any problems.
>>
>>> Thank you


-- 
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on

