Re: Using straw2 crush also with Hammer

On 11/13/2015 09:11 AM, Karan Singh wrote:
> 
> 
>> On 11 Nov 2015, at 22:49, David Clarke <davidc@xxxxxxxxxxxxxxx> wrote:
>>
>> On 12/11/15 09:37, Gregory Farnum wrote:
>>> On Wednesday, November 11, 2015, Wido den Hollander
>>> <wido@xxxxxxxx> wrote:
>>>
>>>    On 11/10/2015 09:49 PM, Vickey Singh wrote:
>>>> On Mon, Nov 9, 2015 at 8:16 PM, Wido den Hollander
>>>    <wido@xxxxxxxx> wrote:
>>>>
>>>>> On 11/09/2015 05:27 PM, Vickey Singh wrote:
>>>>>> Hello Ceph Geeks
>>>>>>
>>>>>> Need your comments with my understanding on straw2.
>>>>>>
>>>>>>   - Is straw2 better than straw?
>>>>>
>>>>> It is not per se better than straw(1).
>>>>>
>>>>> straw2 distributes data better when not all OSDs are equally
>>>>> sized/weighted.
>>>>>
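To illustrate why: a straw2 bucket gives every item an independent
draw of ln(u)/weight, where u is a uniform value hashed from the input
and the item, and picks the item with the largest draw. Selection
probability becomes proportional to weight, and changing one weight
only moves data to or from that item. A minimal Python sketch of the
idea (not Ceph's actual code, which uses the rjenkins hash and a
fixed-point log table):

import hashlib
import math

def straw2_choose(items, key):
    # Pick one item from {name: weight}, probability ~ weight.
    # Sketch only: real CRUSH uses rjenkins, not md5/math.log.
    best, best_draw = None, float("-inf")
    for name, weight in items.items():
        if weight <= 0:
            continue
        h = hashlib.md5(("%s:%s" % (key, name)).encode()).digest()
        u = (int.from_bytes(h[:8], "big") + 1) / (2.0 ** 64 + 1)
        # ln(u) is negative; a bigger weight pulls the draw toward 0.
        draw = math.log(u) / weight
        if draw > best_draw:
            best, best_draw = name, draw
    return best

osds = {"osd.0": 1.0, "osd.1": 2.0, "osd.2": 1.0}
counts = dict.fromkeys(osds, 0)
for pg in range(100000):
    counts[straw2_choose(osds, pg)] += 1
print(counts)  # roughly 25000 / 50000 / 25000

With straw(1) the precomputed straw lengths meant that changing one
item's weight could shuffle data between items that were not touched
at all; straw2 avoids that.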
>>>>>>   - Is straw2 recommended for production use?
>>>>>>
>>>>>
>>>>> I'm using it in multiple clusters and it works fine.
>>>>>
>>>>
>>>> Yes, I followed your update on Twitter :)
>>>>
>>>>
>>>>>
>>>>>> I have a production Ceph Firefly cluster that I am going to
>>>    upgrade to
>>>>>> Ceph Hammer pretty soon. Should I use straw2 for all my Ceph pools?
>>>>>>
>>>>>
>>>>> I would upgrade to Hammer first and make sure that ALL clients
>>>    are updated.
>>>>>
>>>>> In case you are using KVM/Qemu, you will have to stop those processes
>>>>> and start them again before they pick up the new code.
>>>>>
>>>>
>>>> Thanks a lot for this pointer, I didn't know this. So restarting the
>>>    KVM / QEMU
>>>> process affects running VMs? (some downtime)
>>>>
>>>
>>>    Yes. You can also (live) migrate to another host since that will spawn
>>>    Qemu with fresh code on the other host.
>>>
>>>    But you have to make sure all running/connected clients support straw2
>>>    before you enable straw2.
>>>
>>>
>>> I believe straw2 only requires monitor support -- unlike the tunables
>>> involved in executing CRUSH, straw2 is just about how the OSD/bucket
>>> weights get converted into a sort of "internal" straw weight. That's
>>> done on the monitors and encoded into the maps.
>>>
>>> Right?
>>> -Greg
>>
>> I don't believe that's the case.  If you convert a CRUSH map to use
>> straw2 then any connected QEMU/librbd clients without straw2 support
>> will die with something like:
>>
>> terminate called after throwing an instance of
>> 'ceph::buffer::malformed_input'
>>  what():  buffer::malformed_input: unsupported bucket algorithm: 5
>>
>> Where:
>>
>> ceph/src/crush/crush.h:CRUSH_BUCKET_STRAW2 = 5,
>>
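For reference, crush.h numbers the bucket algorithms uniform = 1,
list = 2, tree = 3, straw = 4, straw2 = 5. A pre-Hammer client simply
has no decoder for id 5, so decoding the map throws; roughly like this
hypothetical Python equivalent:

# What a pre-straw2 client knows about; id 5 is not in the table.
KNOWN_BUCKET_ALGS = {1: "uniform", 2: "list", 3: "tree", 4: "straw"}

def decode_bucket_alg(alg_id):
    # Mirrors the malformed_input failure quoted above.
    if alg_id not in KNOWN_BUCKET_ALGS:
        raise ValueError("unsupported bucket algorithm: %d" % alg_id)
    return KNOWN_BUCKET_ALGS[alg_id]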
> 
> Thanks David / Wido for pointing this out.
> 
> So does that mean that before changing the CRUSH map to straw2, one should make sure their clients (OpenStack or anything else) support straw2?

Indeed. Make sure that all clients (including the running ones!!)
support straw2.
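
If you are unsure what your map uses today, dump and decompile it
(ceph osd getcrushmap -o crush.bin, then crushtool -d crush.bin -o
crush.txt) and check the "alg" line of each bucket. A quick sketch of
that scan in Python, assuming the standard decompiled text format:

import re
import sys

def bucket_algs(path):
    # Map bucket name -> "alg" value from a decompiled CRUSH map.
    algs, bucket = {}, None
    with open(path) as f:
        for line in f:
            m = re.match(r"^(\w[\w-]*)\s+(\S+)\s*\{", line)
            if m:
                bucket = m.group(2)
                continue
            m = re.match(r"^\s*alg\s+(\S+)", line)
            if m and bucket:
                algs[bucket] = m.group(1)
                bucket = None
    return algs

if __name__ == "__main__":
    for name, alg in sorted(bucket_algs(sys.argv[1]).items()):
        note = "  <- straw2, needs straw2-aware clients" if alg == "straw2" else ""
        print("%s: %s%s" % (name, alg, note))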

As mentioned, restart Qemu or live migrate to another host. A reboot
inside the instance is NOT sufficient: librados has to be re-initialized,
and that can only be done by stopping and starting Qemu.

> Also, does straw2 support come from the kernel, or from installing Ceph Hammer / later binaries?

Newer kernels support straw2, but I think you need at least kernel 4.1.

> Does CentOS 7.1 (kernel 3.10) support straw2?

I don't think so.

Wido

> 
> 
> 
>>
>> -- 
>> David Clarke
>> Systems Architect
>> Catalyst IT
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



