Re: Ceph RBD w/erasure coding

As a reminder, there's this one waiting ;-)

https://tracker.ceph.com/issues/66641

Frédéric.

PS: For the record, Andre's problem was related to the 'caps' (https://www.reddit.com/r/ceph/comments/1ffzfjc/ceph_rbd_werasure_coding/)
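
For anyone who hits a similar symptom: a quick sanity check is to compare what the client is actually allowed to do with the pools it needs, and make sure the osd caps cover the erasure-coded data pool as well, not only the replicated metadata pool. A minimal sketch, using the client and pool names from the guide quoted below:

ceph auth get client.glance    # show the caps the client currently has
ceph auth caps client.glance \
    mon 'profile rbd' \
    osd 'profile rbd pool=images, profile rbd pool=images_data'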

----- On Sep 15, 2024, at 18:02, Anthony D'Atri anthony.datri@xxxxxxxxx wrote:

> 100% agree.
> 
> I’ve seen claims that SSDs sync quickly, so 2 is enough. Such claims are
> shortsighted.
> 
> I have personally witnessed cases where device failures and OSD flaps / crashes at
> just the wrong times resulted in no clear latest copy of PGs.
> 
> There are cases where data loss isn’t catastrophic, but unless you’re sure, you
> want either R3 for durability or EC to minimize space amp.   A 2,2 profile, for
> example.
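> 
> To put rough numbers on the space amplification: replication stores one full
> copy per replica, so R3 is 3.0x raw per usable byte (R2 is 2.0x), while EC k+m
> stores (k+m)/k. A 2,2 profile is therefore (2+2)/2 = 2.0x, the same raw cost as
> R2 but able to survive two failures, and 4,2 would be (4+2)/4 = 1.5x.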
> 
>> On Sep 15, 2024, at 10:35 AM, Joachim Kraftmayer <joachim.kraftmayer@xxxxxxxxx>
>> wrote:
>> 
>> First comment, on the replicated pools:
>> a replication size of 2 for RBD pools is not suitable for production
>> clusters. It is only a matter of time before you lose data.
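>> 
>> If the pools were already created with size 2, a minimal sketch of the fix,
>> assuming the pool names from the guide quoted below:
>> 
>> ceph osd pool set volumes size 3      # repeat for images and vms
>> ceph osd pool set volumes min_size 2  # refuse I/O when fewer than 2 copies are available
>> 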
>> Joachim
>> 
>> 
>>  www.clyso.com
>> 
>>  Hohenzollernstr. 27, 80801 Munich
>> 
>> Utting a. A. | HR: Augsburg | HRB: 25866 | USt. ID-Nr.: DE2754306
>> 
>> 
>> 
>>> On Sat., Sep 14, 2024 at 14:04, <przemek.kuczynski@xxxxxxxxx> wrote:
>>> 
>>> Here you have a guide:
>>> 
>>> https://febryandana.xyz/posts/deploy-ceph-openstack-cluster/
>>> 
>>> In short:
>>> 
>>> ceph osd pool create images 128
>>> ceph osd pool set images size 2
>>> while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .;sleep 1; done
>>> 
>>> ceph osd pool create volumes 128
>>> ceph osd pool set volumes size 2
>>> while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .;sleep 1; done
>>> 
>>> ceph osd pool create vms 128
>>> ceph osd pool set vms size 2
>>> while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .;sleep 1; done
>>> 
>>> ceph osd erasure-code-profile set ec-22-profile k=2 m=2 crush-device-class=ssd
>>> ceph osd erasure-code-profile ls
>>> ceph osd erasure-code-profile get ec-22-profile
>>> 
>>> ceph osd pool create images_data 128 128 erasure ec-22-profile
>>> while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .;sleep 1; done
>>> 
>>> ceph osd pool create volumes_data 128 128 erasure ec-22-profile
>>> while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .;sleep 1; done
>>> 
>>> ceph osd pool create vms_data 128 128 erasure ec-22-profile
>>> while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .;sleep 1; done
>>> 
>>> ceph osd pool ls detail
>>> 
>>> ceph osd pool set images_data allow_ec_overwrites true
>>> ceph osd pool set volumes_data allow_ec_overwrites true
>>> ceph osd pool set vms_data allow_ec_overwrites true
>>> 
>>> ceph osd pool application enable volumes rbd
>>> ceph osd pool application enable images rbd
>>> ceph osd pool application enable vms rbd
>>> ceph osd pool application enable volumes_data rbd
>>> ceph osd pool application enable images_data rbd
>>> ceph osd pool application enable vms_data rbd
>>> 
>>> In ceph.conf you need to put the following:
>>> 
>>> [client.glance]
>>> rbd default data pool = images_data
>>> 
>>> [client.cinder]
>>> rbd default data pool = volumes_data
>>> 
>>> [client.nova]
>>> rbd default data pool = vms_data
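>>> 
>>> With those settings in place, every new image keeps its metadata in the
>>> replicated pool named at creation time and writes its data objects to the EC
>>> pool. A quick sketch to verify the same behaviour by hand (the image name is
>>> just an example):
>>> 
>>> rbd create --size 10G --data-pool volumes_data volumes/test-ec-image
>>> rbd info volumes/test-ec-image | grep data_pool   # should print: data_pool: volumes_data
>>> 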
>>> For permissions you probably also need to add:
>>> 
>>> caps mon = "allow r, allow command \\"osd blacklist\\", allow command
>>> \\"osd blocklist\\", allow command \\"blacklistop\\", allow command
>>> \\"blocklistop\\""
>>> Newer versions might not work with blacklist anymore.
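>>> 
>>> On reasonably recent releases an alternative sketch is to rely on the built-in
>>> rbd profile instead of hand-written allow-command caps, since the mon
>>> 'profile rbd' cap already includes the blocklist permission (client and pool
>>> names are again just the ones from this guide):
>>> 
>>> ceph auth caps client.cinder \
>>>     mon 'profile rbd' \
>>>     osd 'profile rbd pool=volumes, profile rbd pool=volumes_data'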
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



