Re: RBD fio Performance concerns

In my test it was just recovering some replicas, not the whole OSD.

On 22.11.2012 at 16:35, Alexandre DERUMIER <aderumier@xxxxxxxxx> wrote:

>>> But who cares? It's also on the 2nd node, or even on the 3rd if you have 
>>> 3 replicas.
> Yes, but rebuilding a dead node uses CPU and I/O. (That should be benchmarked too, to see the impact on production.)
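
A rough sketch of how that impact could be measured: run the same fio job against a mapped RBD image before and during recovery. The device path /dev/rbd0 and the OSD id 0 are hypothetical; this assumes fio and the ceph CLI are available on the client.

    # baseline: 4k random writes against the mapped RBD device
    fio --name=baseline --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based

    # mark one OSD out to trigger recovery, then repeat the run
    ceph osd out 0
    fio --name=during-recovery --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based

    # bring the OSD back in afterwards
    ceph osd in 0

Comparing the IOPS reported by the two runs gives a first-order picture of what recovery costs the clients.
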
> 
> 
> 
> ----- Original message ----- 
> 
> From: "Stefan Priebe - Profihost AG" <s.priebe@xxxxxxxxxxxx> 
> To: "Alexandre DERUMIER" <aderumier@xxxxxxxxx> 
> Cc: "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>, "Mark Kampe" <mark.kampe@xxxxxxxxxxx>, "Sébastien Han" <han.sebastien@xxxxxxxxx>, "Mark Nelson" <mark.nelson@xxxxxxxxxxx> 
> Sent: Thursday, November 22, 2012 16:28:57 
> Subject: Re: RBD fio Performance concerns 
> 
> On 22.11.2012 16:26, Alexandre DERUMIER wrote: 
>>>> Haven't tested that. But does this make sense? I mean, data goes to the disk 
>>>> journal; the same disk then has to copy the data from partition A to partition B. 
>>>> 
>>>> Why is this an advantage?
>> 
>> Well, if you are CPU limited, I don't think you can use all 8 × 35,000 IOPS (roughly 280,000 IOPS) per node. 
>> So maybe a benchmark can tell us whether the difference is really big. 
>> 
>> Using tmpfs and a UPS can be OK, but if you have a kernel panic or a hardware problem, you'll lose your journal.
> 
> But who cares? It's also on the 2nd node, or even on the 3rd if you have 
> 3 replicas. 
> 
> Stefan 
> 
> 
>> ----- Original message ----- 
>> 
>> From: "Stefan Priebe - Profihost AG" <s.priebe@xxxxxxxxxxxx> 
>> To: "Mark Nelson" <mark.nelson@xxxxxxxxxxx> 
>> Cc: "Alexandre DERUMIER" <aderumier@xxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>, "Mark Kampe" <mark.kampe@xxxxxxxxxxx>, "Sébastien Han" <han.sebastien@xxxxxxxxx> 
>> Sent: Thursday, November 22, 2012 16:01:56 
>> Subject: Re: RBD fio Performance concerns 
>> 
>> On 22.11.2012 15:46, Mark Nelson wrote: 
>>> I haven't played a whole lot with SSD-only OSDs yet (other than noting 
>>> last summer that IOPS performance wasn't as high as I wanted). Is a 
>>> second partition on the SSD for the journal not an option for you?
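
As a minimal sketch of the layout Mark is suggesting, in ceph.conf terms (device names are hypothetical; assume the SSD is /dev/sdb with the OSD's data filesystem on sdb1 and a small second partition sdb2 for the journal):

    [osd.0]
        osd data = /var/lib/ceph/osd/ceph-0    ; filesystem on /dev/sdb1
        osd journal = /dev/sdb2                ; raw second partition on the same SSD

Pointing "osd journal" at a raw partition at least keeps filesystem overhead out of the journal writes, even though journal and data still share the same device's bandwidth.
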
>> 
>> Haven't tested that. But does this make sense? I mean, data goes to the disk 
>> journal; the same disk then has to copy the data from partition A to partition B. 
>> 
>> Why is this an advantage? 
>> 
>> Stefan 
>> 
>>> Mark 
>>> 
>>> On 11/22/2012 08:42 AM, Stefan Priebe - Profihost AG wrote: 
>>>> On 22.11.2012 15:37, Mark Nelson wrote: 
>>>>> I don't think we recommend tmpfs at all for anything other than playing 
>>>>> around. :)
>>>> 
>>>> I discussed this with somebody from Inktank; I had to search the 
>>>> mailing list. It might be OK if you're working with enough replicas and 
>>>> a UPS. 
>>>> 
>>>> I see no other option while working with SSDs. The only alternative would 
>>>> be the ability to deactivate the journal entirely, but Ceph does not 
>>>> support this. 
>>>> 
>>>> Stefan 
>>>> 
>>>>> On 11/22/2012 08:22 AM, Stefan Priebe - Profihost AG wrote: 
>>>>>> Hi, 
>>>>>> 
>>>>>> can someone from Inktank comment on this? Might using /dev/ram0 with a 
>>>>>> filesystem on it be better than tmpfs, since we could then use dio? 
>>>>>> 
>>>>>> Greets, 
>>>>>> Stefan 
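
For what it's worth, a quick sketch of the difference being asked about (mount points and sizes are made up): tmpfs has no direct-I/O support, so an O_DIRECT open fails with EINVAL, while a brd ramdisk with a regular filesystem on it accepts direct I/O.

    # tmpfs: the O_DIRECT write fails with "Invalid argument"
    mkdir -p /mnt/tmpfs && mount -t tmpfs -o size=512m tmpfs /mnt/tmpfs
    dd if=/dev/zero of=/mnt/tmpfs/probe bs=4k count=1 oflag=direct

    # brd ramdisk (/dev/ram0) with ext4: the same write succeeds
    modprobe brd rd_nr=1 rd_size=524288   # rd_size is in KB, so 512 MB
    mkfs.ext4 -q /dev/ram0
    mkdir -p /mnt/ramdisk && mount /dev/ram0 /mnt/ramdisk
    dd if=/dev/zero of=/mnt/ramdisk/probe bs=4k count=1 oflag=direct

Either way the contents vanish on a reboot or panic, so the replica and UPS caveats discussed above still apply.
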
>>>>>> 
>>>>>>> ----- Original message ----- 
>>>>>>> 
>>>>>>> From: "Stefan Priebe - Profihost AG" <s.priebe@xxxxxxxxxxxx> 
>>>>>>> To: "Sébastien Han" <han.sebastien@xxxxxxxxx> 
>>>>>>> Cc: "Mark Nelson" <mark.nelson@xxxxxxxxxxx>, "Alexandre DERUMIER" 
>>>>>>> <aderumier@xxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>, 
>>>>>>> "Mark Kampe" <mark.kampe@xxxxxxxxxxx> 
>>>>>>> Sent: Thursday, November 22, 2012 14:29:03 
>>>>>>> Subject: Re: RBD fio Performance concerns 
>>>>>>> 
>>>>>>> On 22.11.2012 14:22, Sébastien Han wrote: 
>>>>>>>> And RAMDISK devices are too expensive. 
>>>>>>>> 
>>>>>>>> It would make sense in your infra, but yes they are really expensive.
>>>>>>> 
>>>>>>> We need something like tmpfs: running in local memory but supporting 
>>>>>>> dio (direct I/O). 
>>>>>>> 
>>>>>>> Stefan
>>> 
>>> 

