Re: advice with erasure coding

VMs on erasure-coded pools on SSDs with fast_read enabled work fine since 12.2.2.
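
For anyone digging this out of the archives, a minimal sketch of that kind of setup on Luminous. The names (ec42, rbd_ec_data, vm-disk1) and the 4+2 profile are just examples, not a recommendation; adjust k/m and the failure domain to your hardware:

    # jerasure is the default plugin; most setups only pick k and m
    ceph osd erasure-code-profile set ec42 k=4 m=2 plugin=jerasure crush-failure-domain=host
    ceph osd pool create rbd_ec_data 128 128 erasure ec42
    # RBD on EC needs overwrites (BlueStore OSDs only); fast_read helps read latency
    ceph osd pool set rbd_ec_data allow_ec_overwrites true
    ceph osd pool set rbd_ec_data fast_read 1
    ceph osd pool application enable rbd_ec_data rbd
    # image metadata stays in the replicated "rbd" pool, data goes to the EC pool
    rbd create rbd/vm-disk1 --size 100G --data-pool rbd_ec_data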

Paul

On Sat, 8 Sep 2018 at 18:17, David Turner <drakonstein@xxxxxxxxx> wrote:
>
> I tested running VMs on EC back in Hammer. The performance was just bad. I didn't even need much I/O, but even performing standard maintenance was annoying enough that I abandoned the idea. I didn't really try to tweak settings to make it work, and I only had a 3-node cluster running 2+1. I did use it for write-once/read-many data volumes, which worked great. I eventually moved away from that on RBDs and migrated to EC on CephFS once that became stable in Jewel. Now on Luminous I've even been able to remove the cache tier I once had in front of all of the EC pools.
>
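
A rough sketch of the CephFS-on-EC setup described above, as it looks on Luminous without a cache tier. Pool and path names are placeholders, and it assumes an EC profile such as the ec42 example earlier plus BlueStore OSDs:

    ceph osd pool create cephfs_ec_data 128 128 erasure ec42
    ceph osd pool set cephfs_ec_data allow_ec_overwrites true   # BlueStore OSDs required
    ceph fs add_data_pool cephfs cephfs_ec_data
    # new files created under this directory are stored in the EC pool;
    # CephFS metadata stays in the replicated metadata pool
    setfattr -n ceph.dir.layout.pool -v cephfs_ec_data /mnt/cephfs/bulk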
> On Fri, Sep 7, 2018, 5:19 PM Maged Mokhtar <mmokhtar@xxxxxxxxxxx> wrote:
>>
>> On 2018-09-07 13:52, Janne Johansson wrote:
>>
>>
>>
>> On Fri, 7 Sep 2018 at 13:44, Maged Mokhtar <mmokhtar@xxxxxxxxxxx> wrote:
>>>
>>>
>>> Good day Cephers,
>>>
>>> I want to get some guidance on erasure coding. The docs do describe the different plugins and settings, but really understanding them all and their use cases is not easy:
>>>
>>> - Are the majority of implementations using jerasure and just configuring k and m?
>>
>>
>> Probably, yes
>>
>>>
>>> - For jerasure: when/if would I need to change stripe_unit / osd_pool_erasure_code_stripe_unit / packetsize / algorithm? The main usage is RBD with a 4M object size; the workload is virtualization with an average block size of 64k.
>>>
>>> Any help based on people's actual experience will be greatly appreciated.
>>>
>>>
>>
>> Running VMs on top of EC pools is possible, but probably not recommended.
>> The random reads and writes VMs usually generate make EC a poorer fit than replicated pools, even though it does work.
>>
>> --
>> May the most significant bit of your life be positive.
>>
>> Point well taken... it could be useful for backing up VMs, and maybe for VMs without strict latency requirements if k and m are not large.
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
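
On the stripe_unit / packetsize / algorithm question above: these are all set per erasure-code profile, and the defaults are normally left alone. A sketch of where each knob lives, with purely illustrative values (not a tuning recommendation; benchmark before changing anything):

    # technique selects the jerasure algorithm (reed_sol_van is the default),
    # packetsize is the jerasure encoding packet size (default 2048 bytes),
    # stripe_unit is the per-chunk stripe size for pools created from this profile
    ceph osd erasure-code-profile set ec42tuned \
        plugin=jerasure k=4 m=2 technique=reed_sol_van \
        packetsize=2048 stripe_unit=16K crush-failure-domain=host
    ceph osd erasure-code-profile get ec42tuned
    # osd_pool_erasure_code_stripe_unit (ceph.conf) is only the default used when a
    # profile does not set stripe_unit itself.
    # With k=4 and stripe_unit=16K a full stripe is 4 x 16K = 64K, i.e. roughly one
    # average-sized write per stripe for the 64k workload mentioned above; whether
    # that actually helps depends on alignment, so measure it.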



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



